00:00:00.000 Started by upstream project "autotest-per-patch" build number 127147
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.067 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.068 The recommended git tool is: git
00:00:00.068 using credential 00000000-0000-0000-0000-000000000002
00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.104 Fetching changes from the remote Git repository
00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.166 Using shallow fetch with depth 1
00:00:00.166 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.166 > git --version # timeout=10
00:00:00.223 > git --version # 'git version 2.39.2'
00:00:00.223 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.255 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.255 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.673 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.687 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.699 Checking out Revision c396a3cd44e4090a57fb151c18fefbf4a9bd324b (FETCH_HEAD)
00:00:05.699 > git config core.sparsecheckout # timeout=10
00:00:05.711 > git read-tree -mu HEAD # timeout=10
00:00:05.728 > git checkout -f c396a3cd44e4090a57fb151c18fefbf4a9bd324b # timeout=5
00:00:05.748 Commit message: "jenkins/jjb-config: Use freebsd14 for the pkgdep-freebsd job"
00:00:05.748 > git rev-list --no-walk c396a3cd44e4090a57fb151c18fefbf4a9bd324b # timeout=10
00:00:05.851 [Pipeline] Start of Pipeline
00:00:05.866 [Pipeline] library
00:00:05.868 Loading library shm_lib@master
00:00:05.868 Library shm_lib@master is cached. Copying from home.
00:00:05.889 [Pipeline] node
00:00:05.900 Running on GP6 in /var/jenkins/workspace/crypto-phy-autotest
00:00:05.902 [Pipeline] {
00:00:05.913 [Pipeline] catchError
00:00:05.915 [Pipeline] {
00:00:05.928 [Pipeline] wrap
00:00:05.936 [Pipeline] {
00:00:05.944 [Pipeline] stage
00:00:05.945 [Pipeline] { (Prologue)
00:00:06.141 [Pipeline] sh
00:00:06.414 + logger -p user.info -t JENKINS-CI
00:00:06.427 [Pipeline] echo
00:00:06.428 Node: GP6
00:00:06.434 [Pipeline] sh
00:00:06.716 [Pipeline] setCustomBuildProperty
00:00:06.724 [Pipeline] echo
00:00:06.726 Cleanup processes
00:00:06.729 [Pipeline] sh
00:00:07.003 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:07.003 2202500 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:07.012 [Pipeline] sh
00:00:07.284 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:07.284 ++ grep -v 'sudo pgrep'
00:00:07.284 ++ awk '{print $1}'
00:00:07.284 + sudo kill -9
00:00:07.284 + true
00:00:07.312 [Pipeline] cleanWs
00:00:07.319 [WS-CLEANUP] Deleting project workspace...
00:00:07.319 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.323 [WS-CLEANUP] done
00:00:07.325 [Pipeline] setCustomBuildProperty
00:00:07.336 [Pipeline] sh
00:00:07.607 + sudo git config --global --replace-all safe.directory '*'
00:00:07.683 [Pipeline] httpRequest
00:00:07.714 [Pipeline] echo
00:00:07.716 Sorcerer 10.211.164.101 is alive
00:00:07.727 [Pipeline] httpRequest
00:00:07.734 HttpMethod: GET
00:00:07.734 URL: http://10.211.164.101/packages/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:07.735 Sending request to url: http://10.211.164.101/packages/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:07.753 Response Code: HTTP/1.1 200 OK
00:00:07.754 Success: Status code 200 is in the accepted range: 200,404
00:00:07.754 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:16.998 [Pipeline] sh
00:00:17.278 + tar --no-same-owner -xf jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:17.294 [Pipeline] httpRequest
00:00:17.322 [Pipeline] echo
00:00:17.323 Sorcerer 10.211.164.101 is alive
00:00:17.332 [Pipeline] httpRequest
00:00:17.337 HttpMethod: GET
00:00:17.337 URL: http://10.211.164.101/packages/spdk_6f18624d4dad6e4ce0db8ef9c88f9af541785fdd.tar.gz
00:00:17.338 Sending request to url: http://10.211.164.101/packages/spdk_6f18624d4dad6e4ce0db8ef9c88f9af541785fdd.tar.gz
00:00:17.363 Response Code: HTTP/1.1 200 OK
00:00:17.363 Success: Status code 200 is in the accepted range: 200,404
00:00:17.364 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_6f18624d4dad6e4ce0db8ef9c88f9af541785fdd.tar.gz
00:01:30.128 [Pipeline] sh
00:01:30.405 + tar --no-same-owner -xf spdk_6f18624d4dad6e4ce0db8ef9c88f9af541785fdd.tar.gz
00:01:32.947 [Pipeline] sh
00:01:33.234 + git -C spdk log --oneline -n5
00:01:33.234 6f18624d4 python/rpc: Python rpc call generator.
00:01:33.234 da8d49b2f python/rpc: Replace bdev.py with generated rpc's
00:01:33.234 8711e7e9b autotest: reduce accel tests runs with SPDK_TEST_ACCEL flag
00:01:33.234 50222f810 configure: don't exit on non Intel platforms
00:01:33.234 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests
00:01:33.247 [Pipeline] }
00:01:33.268 [Pipeline] // stage
00:01:33.280 [Pipeline] stage
00:01:33.281 [Pipeline] { (Prepare)
00:01:33.297 [Pipeline] writeFile
00:01:33.313 [Pipeline] sh
00:01:33.594 + logger -p user.info -t JENKINS-CI
00:01:33.606 [Pipeline] sh
00:01:33.884 + logger -p user.info -t JENKINS-CI
00:01:33.895 [Pipeline] sh
00:01:34.175 + cat autorun-spdk.conf
00:01:34.175 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.175 SPDK_TEST_BLOCKDEV=1
00:01:34.175 SPDK_TEST_ISAL=1
00:01:34.175 SPDK_TEST_CRYPTO=1
00:01:34.175 SPDK_TEST_REDUCE=1
00:01:34.175 SPDK_TEST_VBDEV_COMPRESS=1
00:01:34.175 SPDK_RUN_UBSAN=1
00:01:34.175 SPDK_TEST_ACCEL=1
00:01:34.181 RUN_NIGHTLY=0
00:01:34.185 [Pipeline] readFile
00:01:34.209 [Pipeline] withEnv
00:01:34.210 [Pipeline] {
00:01:34.228 [Pipeline] sh
00:01:34.573 + set -ex
00:01:34.573 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]]
00:01:34.573 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:01:34.573 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.573 ++ SPDK_TEST_BLOCKDEV=1
00:01:34.573 ++ SPDK_TEST_ISAL=1
00:01:34.574 ++ SPDK_TEST_CRYPTO=1
00:01:34.574 ++ SPDK_TEST_REDUCE=1
00:01:34.574 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:01:34.574 ++ SPDK_RUN_UBSAN=1
00:01:34.574 ++ SPDK_TEST_ACCEL=1
00:01:34.574 ++ RUN_NIGHTLY=0
00:01:34.574 + case $SPDK_TEST_NVMF_NICS in
00:01:34.574 + DRIVERS=
00:01:34.574 + [[ -n '' ]]
00:01:34.574 + exit 0
00:01:34.583 [Pipeline] }
00:01:34.602 [Pipeline] // withEnv
00:01:34.608 [Pipeline] }
00:01:34.625 [Pipeline] // stage
00:01:34.635 [Pipeline] catchError
00:01:34.637 [Pipeline] {
00:01:34.654 [Pipeline] timeout
00:01:34.654 Timeout set to expire in 1 hr 0 min
00:01:34.656 [Pipeline] {
00:01:34.673 [Pipeline] stage
00:01:34.675 [Pipeline] { (Tests)
00:01:34.689 [Pipeline] sh
00:01:34.970 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest
00:01:34.970 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest
00:01:34.970 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest
00:01:34.970 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]]
00:01:34.970 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:01:34.970 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output
00:01:34.970 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]]
00:01:34.970 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:01:34.970 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output
00:01:34.970 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:01:34.970 + [[ crypto-phy-autotest == pkgdep-* ]]
00:01:34.970 + cd /var/jenkins/workspace/crypto-phy-autotest
00:01:34.970 + source /etc/os-release
00:01:34.970 ++ NAME='Fedora Linux'
00:01:34.970 ++ VERSION='38 (Cloud Edition)'
00:01:34.970 ++ ID=fedora
00:01:34.970 ++ VERSION_ID=38
00:01:34.970 ++ VERSION_CODENAME=
00:01:34.970 ++ PLATFORM_ID=platform:f38
00:01:34.971 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:34.971 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:34.971 ++ LOGO=fedora-logo-icon
00:01:34.971 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:34.971 ++ HOME_URL=https://fedoraproject.org/
00:01:34.971 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:34.971 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:34.971 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:34.971 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:34.971 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:34.971 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:34.971 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:34.971 ++ SUPPORT_END=2024-05-14
00:01:34.971 ++ VARIANT='Cloud Edition'
00:01:34.971 ++ VARIANT_ID=cloud
00:01:34.971 + uname -a
00:01:34.971 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:34.971 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:01:35.905 Hugepages
00:01:35.905 node hugesize free / total
00:01:35.905 node0 1048576kB 0 / 0
00:01:35.905 node0 2048kB 0 / 0
00:01:35.905 node1 1048576kB 0 / 0
00:01:35.905 node1 2048kB 0 / 0
00:01:35.905 
00:01:35.905 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:35.905 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:35.905 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:35.905 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:35.905 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:36.164 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:36.164 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:36.164 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:36.164 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:36.164 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:36.164 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:36.164 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:36.164 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:36.164 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:36.164 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:36.164 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:36.164 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:36.164 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:36.164 + rm -f /tmp/spdk-ld-path
00:01:36.164 + source autorun-spdk.conf
00:01:36.164 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:36.164 ++ SPDK_TEST_BLOCKDEV=1
00:01:36.164 ++ SPDK_TEST_ISAL=1
00:01:36.164 ++ SPDK_TEST_CRYPTO=1
00:01:36.164 ++ SPDK_TEST_REDUCE=1
00:01:36.164 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:01:36.164 ++ SPDK_RUN_UBSAN=1
00:01:36.164 ++ SPDK_TEST_ACCEL=1
00:01:36.164 ++ RUN_NIGHTLY=0
00:01:36.164 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:36.164 + [[ -n '' ]]
00:01:36.164 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk
00:01:36.164 + for M in /var/spdk/build-*-manifest.txt
00:01:36.164 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:36.164 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:01:36.164 + for M in /var/spdk/build-*-manifest.txt
00:01:36.164 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:36.164 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:01:36.164 ++ uname
00:01:36.164 + [[ Linux == \L\i\n\u\x ]]
00:01:36.164 + sudo dmesg -T
00:01:36.164 + sudo dmesg --clear
00:01:36.164 + dmesg_pid=2203188
00:01:36.164 + [[ Fedora Linux == FreeBSD ]]
00:01:36.164 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.164 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.164 + sudo dmesg -Tw
00:01:36.164 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:36.164 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:36.164 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:36.164 + [[ -x /usr/src/fio-static/fio ]]
00:01:36.164 + export FIO_BIN=/usr/src/fio-static/fio
00:01:36.164 + FIO_BIN=/usr/src/fio-static/fio
00:01:36.164 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:36.164 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:36.164 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:36.164 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:36.164 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:36.164 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:36.164 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:36.164 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:36.164 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:01:36.164 Test configuration:
00:01:36.164 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:36.164 SPDK_TEST_BLOCKDEV=1
00:01:36.164 SPDK_TEST_ISAL=1
00:01:36.164 SPDK_TEST_CRYPTO=1
00:01:36.164 SPDK_TEST_REDUCE=1
00:01:36.164 SPDK_TEST_VBDEV_COMPRESS=1
00:01:36.164 SPDK_RUN_UBSAN=1
00:01:36.164 SPDK_TEST_ACCEL=1
00:01:36.164 RUN_NIGHTLY=0
00:01:36.164 10:16:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:01:36.164 10:16:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:36.164 10:16:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:36.164 10:16:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:36.164 10:16:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.164 10:16:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.165 10:16:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.165 10:16:39 -- paths/export.sh@5 -- $ export PATH
00:01:36.165 10:16:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.165 10:16:39 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:01:36.165 10:16:39 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:36.165 10:16:39 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721895399.XXXXXX
00:01:36.165 10:16:39 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721895399.rByN0n
00:01:36.165 10:16:39 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:36.165 10:16:39 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:01:36.165 10:16:39 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:01:36.165 10:16:39 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:36.165 10:16:39 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:36.165 10:16:39 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:36.165 10:16:39 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:36.165 10:16:39 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.165 10:16:39 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk'
00:01:36.165 10:16:39 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:36.165 10:16:39 -- pm/common@17 -- $ local monitor
00:01:36.165 10:16:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:36.165 10:16:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:36.165 10:16:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:36.165 10:16:39 -- pm/common@21 -- $ date +%s
00:01:36.165 10:16:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:36.165 10:16:39 -- pm/common@21 -- $ date +%s
00:01:36.165 10:16:39 -- pm/common@25 -- $ sleep 1
00:01:36.165 10:16:39 -- pm/common@21 -- $ date +%s
00:01:36.165 10:16:39 -- pm/common@21 -- $ date +%s
00:01:36.165 10:16:39 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721895399
00:01:36.165 10:16:39 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721895399
00:01:36.165 10:16:39 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721895399
00:01:36.165 10:16:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721895399
00:01:36.424 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721895399_collect-vmstat.pm.log
00:01:36.424 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721895399_collect-cpu-load.pm.log
00:01:36.424 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721895399_collect-cpu-temp.pm.log
00:01:36.424 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721895399_collect-bmc-pm.bmc.pm.log
00:01:36.424 Traceback (most recent call last):
00:01:36.424 File "/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py", line 24, in 
00:01:36.424 import spdk.rpc as rpc # noqa
00:01:36.424 ^^^^^^^^^^^^^^^^^^^^^^
00:01:36.424 File "/var/jenkins/workspace/crypto-phy-autotest/spdk/python/spdk/rpc/__init__.py", line 13, in 
00:01:36.424 from . import bdev
00:01:36.424 File "/var/jenkins/workspace/crypto-phy-autotest/spdk/python/spdk/rpc/bdev.py", line 8, in 
00:01:36.424 from spdk.rpc.rpc import *
00:01:36.424 ModuleNotFoundError: No module named 'spdk.rpc.rpc'
00:01:37.359 10:16:40 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:37.359 10:16:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:37.359 10:16:40 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:37.359 10:16:40 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:01:37.359 10:16:40 -- spdk/autobuild.sh@16 -- $ date -u
00:01:37.359 Thu Jul 25 08:16:40 AM UTC 2024
00:01:37.359 10:16:40 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:37.359 v24.09-pre-313-g6f18624d4
00:01:37.359 10:16:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:37.359 10:16:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:37.359 10:16:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:37.359 10:16:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:37.359 10:16:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:37.359 10:16:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:37.359 ************************************
00:01:37.359 START TEST ubsan
00:01:37.359 ************************************
00:01:37.359 10:16:40 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:37.359 using ubsan
00:01:37.359 
00:01:37.359 real	0m0.000s
00:01:37.359 user	0m0.000s
00:01:37.359 sys	0m0.000s
00:01:37.359 10:16:40 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:37.359 10:16:40 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:37.359 ************************************
00:01:37.359 END TEST ubsan
00:01:37.359 ************************************
00:01:37.359 10:16:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:37.359 10:16:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:37.359 10:16:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:37.359 10:16:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:37.359 10:16:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:37.359 10:16:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:37.359 10:16:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:37.359 10:16:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:37.359 10:16:40 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:37.359 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk
00:01:37.359 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
00:01:37.617 Using 'verbs' RDMA provider
00:01:48.524 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:58.506 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:58.506 Creating mk/config.mk...done.
00:01:58.506 Creating mk/cc.flags.mk...done.
00:01:58.506 Type 'make' to build.
00:01:58.506 10:17:01 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:01:58.506 10:17:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:58.506 10:17:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:58.506 10:17:01 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.506 ************************************
00:01:58.506 START TEST make
00:01:58.506 ************************************
00:01:58.506 10:17:01 make -- common/autotest_common.sh@1125 -- $ make -j48
00:02:45.240 The Meson build system
00:02:45.240 Version: 1.3.1
00:02:45.240 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk
00:02:45.240 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp
00:02:45.240 Build type: native build
00:02:45.240 Program cat found: YES (/usr/bin/cat)
00:02:45.240 Project name: DPDK
00:02:45.240 Project version: 24.03.0
00:02:45.240 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:45.240 C linker for the host machine: cc ld.bfd 2.39-16
00:02:45.240 Host machine cpu family: x86_64
00:02:45.240 Host machine cpu: x86_64
00:02:45.240 Message: ## Building in Developer Mode ##
00:02:45.240 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:45.240 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:45.240 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:45.240 Program python3 found: YES (/usr/bin/python3)
00:02:45.240 Program cat found: YES (/usr/bin/cat)
00:02:45.241 Compiler for C supports arguments -march=native: YES
00:02:45.241 Checking for size of "void *" : 8
00:02:45.241 Checking for size of "void *" : 8 (cached)
00:02:45.241 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:45.241 Library m found: YES
00:02:45.241 Library numa found: YES
00:02:45.241 Has header "numaif.h" : YES
00:02:45.241 Library fdt found: NO
00:02:45.241 Library execinfo found: NO
00:02:45.241 Has header "execinfo.h" : YES
00:02:45.241 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:45.241 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:45.241 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:45.241 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:45.241 Run-time dependency openssl found: YES 3.0.9
00:02:45.241 Run-time dependency libpcap found: YES 1.10.4
00:02:45.241 Has header "pcap.h" with dependency libpcap: YES
00:02:45.241 Compiler for C supports arguments -Wcast-qual: YES
00:02:45.241 Compiler for C supports arguments -Wdeprecated: YES
00:02:45.241 Compiler for C supports arguments -Wformat: YES
00:02:45.241 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:45.241 Compiler for C supports arguments -Wformat-security: NO
00:02:45.241 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:45.241 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:45.241 Compiler for C supports arguments -Wnested-externs: YES
00:02:45.241 Compiler for C supports arguments -Wold-style-definition: YES
00:02:45.241 Compiler for C supports arguments -Wpointer-arith: YES
00:02:45.241 Compiler for C supports arguments -Wsign-compare: YES
00:02:45.241 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:45.241 Compiler for C supports arguments -Wundef: YES
00:02:45.241 Compiler for C supports arguments -Wwrite-strings: YES
00:02:45.241 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:45.241 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:45.241 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:45.241 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:45.241 Program objdump found: YES (/usr/bin/objdump)
00:02:45.241 Compiler for C supports arguments -mavx512f: YES
00:02:45.241 Checking if "AVX512 checking" compiles: YES
00:02:45.241 Fetching value of define "__SSE4_2__" : 1
00:02:45.241 Fetching value of define "__AES__" : 1
00:02:45.241 Fetching value of define "__AVX__" : 1
00:02:45.241 Fetching value of define "__AVX2__" : (undefined)
00:02:45.241 Fetching value of define "__AVX512BW__" : (undefined)
00:02:45.241 Fetching value of define "__AVX512CD__" : (undefined)
00:02:45.241 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:45.241 Fetching value of define "__AVX512F__" : (undefined)
00:02:45.241 Fetching value of define "__AVX512VL__" : (undefined)
00:02:45.241 Fetching value of define "__PCLMUL__" : 1
00:02:45.241 Fetching value of define "__RDRND__" : 1
00:02:45.241 Fetching value of define "__RDSEED__" : (undefined)
00:02:45.241 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:45.241 Fetching value of define "__znver1__" : (undefined)
00:02:45.241 Fetching value of define "__znver2__" : (undefined)
00:02:45.241 Fetching value of define "__znver3__" : (undefined)
00:02:45.241 Fetching value of define "__znver4__" : (undefined)
00:02:45.241 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:45.241 Message: lib/log: Defining dependency "log"
00:02:45.241 Message: lib/kvargs: Defining dependency "kvargs"
00:02:45.241 Message: lib/telemetry: Defining dependency "telemetry"
00:02:45.241 Checking for function "getentropy" : NO
00:02:45.241 Message: lib/eal: Defining dependency "eal"
00:02:45.241 Message: lib/ring: Defining dependency "ring"
00:02:45.241 Message: lib/rcu: Defining dependency "rcu"
00:02:45.241 Message: lib/mempool: Defining dependency "mempool"
00:02:45.241 Message: lib/mbuf: Defining dependency "mbuf"
00:02:45.241 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:45.241 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:45.241 Compiler for C supports arguments -mpclmul: YES
00:02:45.241 Compiler for C supports arguments -maes: YES
00:02:45.241 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:45.241 Compiler for C supports arguments -mavx512bw: YES
00:02:45.241 Compiler for C supports arguments -mavx512dq: YES
00:02:45.241 Compiler for C supports arguments -mavx512vl: YES
00:02:45.241 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:45.241 Compiler for C supports arguments -mavx2: YES
00:02:45.241 Compiler for C supports arguments -mavx: YES
00:02:45.241 Message: lib/net: Defining dependency "net"
00:02:45.241 Message: lib/meter: Defining dependency "meter"
00:02:45.241 Message: lib/ethdev: Defining dependency "ethdev"
00:02:45.241 Message: lib/pci: Defining dependency "pci"
00:02:45.241 Message: lib/cmdline: Defining dependency "cmdline"
00:02:45.241 Message: lib/hash: Defining dependency "hash"
00:02:45.241 Message: lib/timer: Defining dependency "timer"
00:02:45.241 Message: lib/compressdev: Defining dependency "compressdev"
00:02:45.241 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:45.241 Message: lib/dmadev: Defining dependency "dmadev"
00:02:45.241 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:45.241 Message: lib/power: Defining dependency "power"
00:02:45.241 Message: lib/reorder: Defining dependency "reorder"
00:02:45.241 Message: lib/security: Defining dependency "security"
00:02:45.241 Has header "linux/userfaultfd.h" : YES
00:02:45.241 Has header "linux/vduse.h" : YES
00:02:45.241 Message: lib/vhost: Defining dependency "vhost"
00:02:45.241 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:45.241 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:02:45.241 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:45.241 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:45.241 Compiler for C supports arguments -std=c11: YES
00:02:45.241 Compiler for C supports arguments -Wno-strict-prototypes: YES
00:02:45.241 Compiler for C supports arguments -D_BSD_SOURCE: YES
00:02:45.241 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES
00:02:45.241 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES
00:02:45.241 Run-time dependency libmlx5 found: YES 1.24.44.0
00:02:45.241 Run-time dependency libibverbs found: YES 1.14.44.0
00:02:45.241 Library mtcr_ul found: NO
00:02:45.241 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO
00:02:45.241 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO
00:02:45.241 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES
00:02:45.241 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES
00:02:45.242 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES
00:02:45.242 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES
00:02:45.242 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES
00:02:45.242 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES
00:02:45.242 Header
"infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with 
dependencies libmlx5, libibverbs: YES 00:02:45.242 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:02:45.242 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:02:45.242 Configuring mlx5_autoconf.h using configuration 00:02:45.242 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:02:45.242 Run-time dependency libcrypto found: YES 3.0.9 00:02:45.242 Library IPSec_MB found: YES 00:02:45.242 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:02:45.242 Message: drivers/common/qat: Defining dependency "common_qat" 00:02:45.242 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:45.242 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:45.242 Library IPSec_MB found: YES 00:02:45.242 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:02:45.242 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:02:45.242 Compiler for C supports arguments -std=c11: YES (cached) 00:02:45.242 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:02:45.242 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:02:45.242 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:02:45.242 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:02:45.242 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:02:45.242 Run-time dependency libisal found: NO (tried pkgconfig) 00:02:45.242 Library libisal found: NO 00:02:45.242 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:02:45.242 Compiler for C 
supports arguments -std=c11: YES (cached) 00:02:45.242 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:02:45.242 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:02:45.242 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:02:45.242 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:02:45.242 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:02:45.242 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:45.242 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:45.242 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:45.242 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:45.242 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:45.242 Program doxygen found: YES (/usr/bin/doxygen) 00:02:45.242 Configuring doxy-api-html.conf using configuration 00:02:45.242 Configuring doxy-api-man.conf using configuration 00:02:45.242 Program mandb found: YES (/usr/bin/mandb) 00:02:45.242 Program sphinx-build found: NO 00:02:45.242 Configuring rte_build_config.h using configuration 00:02:45.242 Message: 00:02:45.242 ================= 00:02:45.242 Applications Enabled 00:02:45.242 ================= 00:02:45.242 00:02:45.242 apps: 00:02:45.242 00:02:45.242 00:02:45.242 Message: 00:02:45.242 ================= 00:02:45.242 Libraries Enabled 00:02:45.242 ================= 00:02:45.242 00:02:45.242 libs: 00:02:45.242 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:45.242 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:45.242 cryptodev, dmadev, power, reorder, security, vhost, 00:02:45.242 00:02:45.242 Message: 00:02:45.242 =============== 00:02:45.242 Drivers Enabled 00:02:45.242 =============== 00:02:45.242 00:02:45.242 common: 00:02:45.242 mlx5, qat, 00:02:45.242 bus: 00:02:45.242 auxiliary, pci, vdev, 00:02:45.242 
mempool: 00:02:45.242 ring, 00:02:45.242 dma: 00:02:45.242 00:02:45.242 net: 00:02:45.242 00:02:45.242 crypto: 00:02:45.242 ipsec_mb, mlx5, 00:02:45.242 compress: 00:02:45.242 isal, mlx5, 00:02:45.242 vdpa: 00:02:45.242 00:02:45.242 00:02:45.242 Message: 00:02:45.242 ================= 00:02:45.242 Content Skipped 00:02:45.242 ================= 00:02:45.242 00:02:45.242 apps: 00:02:45.242 dumpcap: explicitly disabled via build config 00:02:45.242 graph: explicitly disabled via build config 00:02:45.242 pdump: explicitly disabled via build config 00:02:45.242 proc-info: explicitly disabled via build config 00:02:45.242 test-acl: explicitly disabled via build config 00:02:45.242 test-bbdev: explicitly disabled via build config 00:02:45.242 test-cmdline: explicitly disabled via build config 00:02:45.242 test-compress-perf: explicitly disabled via build config 00:02:45.242 test-crypto-perf: explicitly disabled via build config 00:02:45.242 test-dma-perf: explicitly disabled via build config 00:02:45.242 test-eventdev: explicitly disabled via build config 00:02:45.242 test-fib: explicitly disabled via build config 00:02:45.242 test-flow-perf: explicitly disabled via build config 00:02:45.242 test-gpudev: explicitly disabled via build config 00:02:45.242 test-mldev: explicitly disabled via build config 00:02:45.242 test-pipeline: explicitly disabled via build config 00:02:45.242 test-pmd: explicitly disabled via build config 00:02:45.242 test-regex: explicitly disabled via build config 00:02:45.242 test-sad: explicitly disabled via build config 00:02:45.242 test-security-perf: explicitly disabled via build config 00:02:45.242 00:02:45.242 libs: 00:02:45.242 argparse: explicitly disabled via build config 00:02:45.242 metrics: explicitly disabled via build config 00:02:45.242 acl: explicitly disabled via build config 00:02:45.242 bbdev: explicitly disabled via build config 00:02:45.242 bitratestats: explicitly disabled via build config 00:02:45.242 bpf: explicitly disabled 
via build config
00:02:45.242 cfgfile: explicitly disabled via build config
00:02:45.242 distributor: explicitly disabled via build config
00:02:45.242 efd: explicitly disabled via build config
00:02:45.242 eventdev: explicitly disabled via build config
00:02:45.242 dispatcher: explicitly disabled via build config
00:02:45.242 gpudev: explicitly disabled via build config
00:02:45.242 gro: explicitly disabled via build config
00:02:45.242 gso: explicitly disabled via build config
00:02:45.242 ip_frag: explicitly disabled via build config
00:02:45.242 jobstats: explicitly disabled via build config
00:02:45.242 latencystats: explicitly disabled via build config
00:02:45.242 lpm: explicitly disabled via build config
00:02:45.242 member: explicitly disabled via build config
00:02:45.242 pcapng: explicitly disabled via build config
00:02:45.242 rawdev: explicitly disabled via build config
00:02:45.242 regexdev: explicitly disabled via build config
00:02:45.242 mldev: explicitly disabled via build config
00:02:45.242 rib: explicitly disabled via build config
00:02:45.242 sched: explicitly disabled via build config
00:02:45.242 stack: explicitly disabled via build config
00:02:45.242 ipsec: explicitly disabled via build config
00:02:45.242 pdcp: explicitly disabled via build config
00:02:45.242 fib: explicitly disabled via build config
00:02:45.242 port: explicitly disabled via build config
00:02:45.242 pdump: explicitly disabled via build config
00:02:45.242 table: explicitly disabled via build config
00:02:45.242 pipeline: explicitly disabled via build config
00:02:45.242 graph: explicitly disabled via build config
00:02:45.242 node: explicitly disabled via build config
00:02:45.242
00:02:45.242 drivers:
00:02:45.242 common/cpt: not in enabled drivers build config
00:02:45.242 common/dpaax: not in enabled drivers build config
00:02:45.242 common/iavf: not in enabled drivers build config
00:02:45.242 common/idpf: not in enabled drivers build config
00:02:45.242 common/ionic: not in enabled drivers build config
00:02:45.242 common/mvep: not in enabled drivers build config
00:02:45.242 common/octeontx: not in enabled drivers build config
00:02:45.242 bus/cdx: not in enabled drivers build config
00:02:45.242 bus/dpaa: not in enabled drivers build config
00:02:45.242 bus/fslmc: not in enabled drivers build config
00:02:45.242 bus/ifpga: not in enabled drivers build config
00:02:45.242 bus/platform: not in enabled drivers build config
00:02:45.242 bus/uacce: not in enabled drivers build config
00:02:45.242 bus/vmbus: not in enabled drivers build config
00:02:45.243 common/cnxk: not in enabled drivers build config
00:02:45.243 common/nfp: not in enabled drivers build config
00:02:45.243 common/nitrox: not in enabled drivers build config
00:02:45.243 common/sfc_efx: not in enabled drivers build config
00:02:45.243 mempool/bucket: not in enabled drivers build config
00:02:45.243 mempool/cnxk: not in enabled drivers build config
00:02:45.243 mempool/dpaa: not in enabled drivers build config
00:02:45.243 mempool/dpaa2: not in enabled drivers build config
00:02:45.243 mempool/octeontx: not in enabled drivers build config
00:02:45.243 mempool/stack: not in enabled drivers build config
00:02:45.243 dma/cnxk: not in enabled drivers build config
00:02:45.243 dma/dpaa: not in enabled drivers build config
00:02:45.243 dma/dpaa2: not in enabled drivers build config
00:02:45.243 dma/hisilicon: not in enabled drivers build config
00:02:45.243 dma/idxd: not in enabled drivers build config
00:02:45.243 dma/ioat: not in enabled drivers build config
00:02:45.243 dma/skeleton: not in enabled drivers build config
00:02:45.243 net/af_packet: not in enabled drivers build config
00:02:45.243 net/af_xdp: not in enabled drivers build config
00:02:45.243 net/ark: not in enabled drivers build config
00:02:45.243 net/atlantic: not in enabled drivers build config
00:02:45.243 net/avp: not in enabled drivers build config
00:02:45.243 net/axgbe: not in enabled drivers build config
00:02:45.243 net/bnx2x: not in enabled drivers build config
00:02:45.243 net/bnxt: not in enabled drivers build config
00:02:45.243 net/bonding: not in enabled drivers build config
00:02:45.243 net/cnxk: not in enabled drivers build config
00:02:45.243 net/cpfl: not in enabled drivers build config
00:02:45.243 net/cxgbe: not in enabled drivers build config
00:02:45.243 net/dpaa: not in enabled drivers build config
00:02:45.243 net/dpaa2: not in enabled drivers build config
00:02:45.243 net/e1000: not in enabled drivers build config
00:02:45.243 net/ena: not in enabled drivers build config
00:02:45.243 net/enetc: not in enabled drivers build config
00:02:45.243 net/enetfec: not in enabled drivers build config
00:02:45.243 net/enic: not in enabled drivers build config
00:02:45.243 net/failsafe: not in enabled drivers build config
00:02:45.243 net/fm10k: not in enabled drivers build config
00:02:45.243 net/gve: not in enabled drivers build config
00:02:45.243 net/hinic: not in enabled drivers build config
00:02:45.243 net/hns3: not in enabled drivers build config
00:02:45.243 net/i40e: not in enabled drivers build config
00:02:45.243 net/iavf: not in enabled drivers build config
00:02:45.243 net/ice: not in enabled drivers build config
00:02:45.243 net/idpf: not in enabled drivers build config
00:02:45.243 net/igc: not in enabled drivers build config
00:02:45.243 net/ionic: not in enabled drivers build config
00:02:45.243 net/ipn3ke: not in enabled drivers build config
00:02:45.243 net/ixgbe: not in enabled drivers build config
00:02:45.243 net/mana: not in enabled drivers build config
00:02:45.243 net/memif: not in enabled drivers build config
00:02:45.243 net/mlx4: not in enabled drivers build config
00:02:45.243 net/mlx5: not in enabled drivers build config
00:02:45.243 net/mvneta: not in enabled drivers build config
00:02:45.243 net/mvpp2: not in enabled drivers build config
00:02:45.243 net/netvsc: not in enabled drivers build config
00:02:45.243 net/nfb: not in enabled drivers build config
00:02:45.243 net/nfp: not in enabled drivers build config
00:02:45.243 net/ngbe: not in enabled drivers build config
00:02:45.243 net/null: not in enabled drivers build config
00:02:45.243 net/octeontx: not in enabled drivers build config
00:02:45.243 net/octeon_ep: not in enabled drivers build config
00:02:45.243 net/pcap: not in enabled drivers build config
00:02:45.243 net/pfe: not in enabled drivers build config
00:02:45.243 net/qede: not in enabled drivers build config
00:02:45.243 net/ring: not in enabled drivers build config
00:02:45.243 net/sfc: not in enabled drivers build config
00:02:45.243 net/softnic: not in enabled drivers build config
00:02:45.243 net/tap: not in enabled drivers build config
00:02:45.243 net/thunderx: not in enabled drivers build config
00:02:45.243 net/txgbe: not in enabled drivers build config
00:02:45.243 net/vdev_netvsc: not in enabled drivers build config
00:02:45.243 net/vhost: not in enabled drivers build config
00:02:45.243 net/virtio: not in enabled drivers build config
00:02:45.243 net/vmxnet3: not in enabled drivers build config
00:02:45.243 raw/*: missing internal dependency, "rawdev"
00:02:45.243 crypto/armv8: not in enabled drivers build config
00:02:45.243 crypto/bcmfs: not in enabled drivers build config
00:02:45.243 crypto/caam_jr: not in enabled drivers build config
00:02:45.243 crypto/ccp: not in enabled drivers build config
00:02:45.243 crypto/cnxk: not in enabled drivers build config
00:02:45.243 crypto/dpaa_sec: not in enabled drivers build config
00:02:45.243 crypto/dpaa2_sec: not in enabled drivers build config
00:02:45.243 crypto/mvsam: not in enabled drivers build config
00:02:45.243 crypto/nitrox: not in enabled drivers build config
00:02:45.243 crypto/null: not in enabled drivers build config
00:02:45.243 crypto/octeontx: not in enabled drivers build config
00:02:45.243 crypto/openssl: not in enabled drivers build config
00:02:45.243
crypto/scheduler: not in enabled drivers build config
00:02:45.243 crypto/uadk: not in enabled drivers build config
00:02:45.243 crypto/virtio: not in enabled drivers build config
00:02:45.243 compress/nitrox: not in enabled drivers build config
00:02:45.243 compress/octeontx: not in enabled drivers build config
00:02:45.243 compress/zlib: not in enabled drivers build config
00:02:45.243 regex/*: missing internal dependency, "regexdev"
00:02:45.243 ml/*: missing internal dependency, "mldev"
00:02:45.243 vdpa/ifc: not in enabled drivers build config
00:02:45.243 vdpa/mlx5: not in enabled drivers build config
00:02:45.243 vdpa/nfp: not in enabled drivers build config
00:02:45.243 vdpa/sfc: not in enabled drivers build config
00:02:45.243 event/*: missing internal dependency, "eventdev"
00:02:45.243 baseband/*: missing internal dependency, "bbdev"
00:02:45.243 gpu/*: missing internal dependency, "gpudev"
00:02:45.243
00:02:45.243
00:02:45.243 Build targets in project: 115
00:02:45.243
00:02:45.243 DPDK 24.03.0
00:02:45.243
00:02:45.243 User defined options
00:02:45.243 buildtype : debug
00:02:45.243 default_library : shared
00:02:45.243 libdir : lib
00:02:45.243 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
00:02:45.243 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror
00:02:45.243 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal
00:02:45.243 cpu_instruction_set: native
00:02:45.243 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:45.243 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:45.243 enable_docs : false
00:02:45.243 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5
00:02:45.243 enable_kmods : false
00:02:45.243 max_lcores : 128
00:02:45.243 tests : false
00:02:45.243
00:02:45.243 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:45.243 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp'
00:02:45.243 [1/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:45.243 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:45.243 [3/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:45.243 [4/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:45.243 [5/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:45.243 [6/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:45.243 [7/378] Linking static target lib/librte_kvargs.a
00:02:45.243 [8/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:45.243 [9/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:45.243 [10/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:45.243 [11/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:45.243 [12/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:45.243 [13/378] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:45.243 [14/378] Linking static target lib/librte_log.a
00:02:45.243 [15/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:45.243 [16/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:45.243 [17/378] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.243 [18/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:45.243 [19/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:45.243 [20/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:45.243 [21/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:45.243 [22/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:45.243 [23/378] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.243 [24/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:45.243 [25/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:45.243 [26/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:45.243 [27/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:45.243 [28/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:45.243 [29/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:45.243 [30/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:45.243 [31/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:45.243 [32/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:45.243 [33/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:45.243 [34/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:45.243 [35/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:45.243 [36/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:45.244 [37/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:45.244 [38/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:45.244 [39/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:45.244 [40/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:45.244 [41/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:45.244 [42/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:45.244 [43/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:45.244 [44/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:45.244 [45/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:45.244 [46/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:45.244 [47/378] Linking target lib/librte_log.so.24.1
00:02:45.244 [48/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:45.244 [49/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:45.244 [50/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:45.244 [51/378] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:45.244 [52/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:45.244 [53/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:45.244 [54/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:45.244 [55/378] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:45.244 [56/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:45.244 [57/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:45.244 [58/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:45.244 [59/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:45.244 [60/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:45.244 [61/378] Linking static target lib/librte_telemetry.a
00:02:45.244 [62/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:45.244 [63/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:45.244 [64/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:45.244 [65/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:45.244 [66/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:45.244 [67/378] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:45.244 [68/378] Linking target lib/librte_kvargs.so.24.1
00:02:45.244 [69/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:45.244 [70/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:45.244 [71/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:45.244 [72/378] Linking static target lib/librte_pci.a
00:02:45.244 [73/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:45.244 [74/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:45.244 [75/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:45.244 [76/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:45.244 [77/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:45.244 [78/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:45.528 [79/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:45.528 [80/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:45.528 [81/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:45.528 [82/378] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:45.528 [83/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.528 [84/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.528 [85/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:45.528 [86/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:45.528 [87/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:45.528 [88/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:45.528 [89/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:45.528 [90/378] Linking target lib/librte_telemetry.so.24.1
00:02:45.528 [91/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:45.528 [92/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:45.528 [93/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:45.528 [94/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:45.528 [95/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:45.528 [96/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:45.528 [97/378] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:45.528 [98/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:45.528 [99/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:45.528 [100/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:45.528 [101/378] Linking static target lib/librte_meter.a
00:02:45.528 [102/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:45.528 [103/378] Linking static target lib/librte_ring.a
00:02:45.798 [104/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:45.798 [105/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:45.798 [106/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:45.798 [107/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:45.798 [108/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:45.798 [109/378] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:45.798 [110/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:45.798 [111/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:45.798 [112/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:45.798 [113/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:45.798 [114/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:45.798 [115/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:45.798 [116/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:45.798 [117/378] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:45.798 [118/378] Linking static target lib/librte_eal.a
00:02:45.798 [119/378] Linking static target lib/librte_rcu.a
00:02:45.798 [120/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:45.798 [121/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:45.798 [122/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:45.798 [123/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:45.798 [124/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:45.798 [125/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:45.798 [126/378] Linking static target lib/librte_mempool.a
00:02:45.798 [127/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:45.798 [128/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:45.798 [129/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:46.060 [130/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:46.060 [131/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:46.060 [132/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:46.060 [133/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:46.060 [134/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:46.060 [135/378] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.060 [136/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:46.060 [137/378] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.060 [138/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o
00:02:46.060 [139/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:46.320 [140/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:46.320 [141/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:46.320 [142/378] Linking static target lib/librte_net.a
00:02:46.320 [143/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:46.320 [144/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:46.320 [145/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:46.320 [146/378] Linking static target lib/librte_cmdline.a
00:02:46.320 [147/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.583 [148/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:46.583 [149/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:46.583 [150/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:46.583 [151/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:46.583 [152/378] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:46.583 [153/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:46.583 [154/378] Linking static target lib/librte_timer.a
00:02:46.583 [155/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:46.583 [156/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:46.583 [157/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:46.583 [158/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.583 [159/378] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:46.583 [160/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:46.846 [161/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:46.846 [162/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:46.846 [163/378] Linking static target lib/librte_dmadev.a
00:02:46.846 [164/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:46.846 [165/378] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:46.846 [166/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:46.846 [167/378] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:46.846 [168/378] Compiling C object
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:47.113 [169/378] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.113 [170/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:02:47.113 [171/378] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:47.113 [172/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:47.113 [173/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:02:47.113 [174/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:47.113 [175/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:02:47.113 [176/378] Linking static target lib/librte_power.a 00:02:47.113 [177/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:47.113 [178/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:02:47.113 [179/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:47.113 [180/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:47.113 [181/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:47.113 [182/378] Linking static target lib/librte_compressdev.a 00:02:47.113 [183/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:47.113 [184/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:47.113 [185/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.113 [186/378] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:47.113 [187/378] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:47.113 [188/378] Linking static target lib/librte_hash.a 00:02:47.377 [189/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:47.377 [190/378] Compiling C object 
drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:02:47.377 [191/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:47.377 [192/378] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:47.377 [193/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:47.377 [194/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:47.377 [195/378] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:47.377 [196/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:02:47.377 [197/378] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:02:47.377 [198/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:47.377 [199/378] Linking static target drivers/librte_bus_auxiliary.a 00:02:47.377 [200/378] Linking static target lib/librte_reorder.a 00:02:47.377 [201/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:47.639 [202/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.639 [203/378] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:02:47.639 [204/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:47.639 [205/378] Linking static target lib/librte_mbuf.a 00:02:47.639 [206/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.639 [207/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:47.639 [208/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:47.639 [209/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.639 [210/378] Linking static target drivers/librte_bus_vdev.a 00:02:47.639 [211/378] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:47.639 [212/378] Compiling C object 
drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:02:47.900 [213/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:02:47.900 [214/378] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:47.900 [215/378] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.900 [216/378] Linking static target lib/librte_security.a 00:02:47.900 [217/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.900 [218/378] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.900 [219/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:02:47.900 [220/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.900 [221/378] Linking static target drivers/librte_bus_pci.a 00:02:47.900 [222/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:02:47.900 [223/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:02:47.900 [224/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:02:47.900 [225/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:02:47.900 [226/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:02:47.900 [227/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.900 [228/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.900 [229/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.900 [230/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:02:47.900 [231/378] Compiling C 
object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:02:48.162 [232/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:02:48.162 [233/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:02:48.162 [234/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:02:48.162 [235/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:48.162 [236/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:02:48.162 [237/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:02:48.162 [238/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:02:48.162 [239/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o 00:02:48.162 [240/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:02:48.162 [241/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.162 [242/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o 00:02:48.162 [243/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:02:48.162 [244/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:02:48.162 [245/378] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.162 [246/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:02:48.162 [247/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:02:48.420 [248/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:02:48.420 [249/378] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o 00:02:48.420 [250/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.420 [251/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.420 [252/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.678 [253/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:02:48.678 [254/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:02:48.678 [255/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:48.678 [256/378] Linking static target lib/librte_ethdev.a 00:02:48.678 [257/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:02:48.678 [258/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:48.678 [259/378] Linking static target lib/librte_cryptodev.a 00:02:48.678 [260/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:02:48.678 [261/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:02:48.936 [262/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o 00:02:48.936 [263/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:48.936 [264/378] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:48.936 [265/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:02:48.936 [266/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:02:48.936 [267/378] Linking static target drivers/libtmp_rte_common_mlx5.a 00:02:48.936 [268/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:02:48.936 [269/378] Compiling C 
object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:02:48.936 [270/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:02:49.195 [271/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:02:49.195 [272/378] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:49.195 [273/378] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.195 [274/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.195 [275/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:02:49.195 [276/378] Linking static target drivers/librte_mempool_ring.a 00:02:49.195 [277/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:02:49.195 [278/378] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:02:49.195 [279/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:02:49.195 [280/378] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:02:49.195 [281/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o 00:02:49.195 [282/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:02:49.195 [283/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:02:49.195 [284/378] Linking static target drivers/librte_common_mlx5.a 00:02:49.453 [285/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:02:49.453 [286/378] Linking static target drivers/libtmp_rte_compress_isal.a 00:02:49.453 [287/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:02:49.453 [288/378] Compiling C object 
drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:02:49.453 [289/378] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:02:49.453 [290/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:02:49.453 [291/378] Linking static target drivers/librte_compress_mlx5.a 00:02:49.453 [292/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:02:49.453 [293/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:02:49.453 [294/378] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:02:49.453 [295/378] Compiling C object drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:02:49.453 [296/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:02:49.453 [297/378] Linking static target drivers/librte_compress_isal.a 00:02:49.711 [298/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:02:49.711 [299/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:02:49.711 [300/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:02:49.711 [301/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.711 [302/378] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:02:49.711 [303/378] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:02:49.711 [304/378] Linking static target drivers/librte_crypto_mlx5.a 00:02:49.711 [305/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:02:49.969 [306/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:02:49.969 [307/378] Compiling C object 
drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:02:49.969 [308/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:02:49.969 [309/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:02:50.227 [310/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:02:50.227 [311/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:02:50.227 [312/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:02:50.227 [313/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:02:50.227 [314/378] Linking static target drivers/librte_crypto_ipsec_mb.a 00:02:50.486 [315/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:02:51.052 [316/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:02:51.052 [317/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:51.310 [318/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:02:51.876 [319/378] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.808 [320/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.808 [321/378] Linking target lib/librte_eal.so.24.1 00:02:52.808 [322/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.065 [323/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:53.065 [324/378] Linking target lib/librte_timer.so.24.1 00:02:53.065 [325/378] Linking target lib/librte_ring.so.24.1 00:02:53.065 [326/378] Linking target lib/librte_meter.so.24.1 00:02:53.065 [327/378] Linking target lib/librte_pci.so.24.1 00:02:53.065 [328/378] 
Linking target lib/librte_dmadev.so.24.1 00:02:53.065 [329/378] Linking target drivers/librte_bus_auxiliary.so.24.1 00:02:53.065 [330/378] Linking target drivers/librte_bus_vdev.so.24.1 00:02:53.066 [331/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols 00:02:53.066 [332/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:53.066 [333/378] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols 00:02:53.066 [334/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:53.066 [335/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:53.066 [336/378] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:53.066 [337/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:53.066 [338/378] Linking target lib/librte_rcu.so.24.1 00:02:53.066 [339/378] Linking target drivers/librte_bus_pci.so.24.1 00:02:53.066 [340/378] Linking target lib/librte_mempool.so.24.1 00:02:53.324 [341/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:53.324 [342/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols 00:02:53.324 [343/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:53.324 [344/378] Linking target drivers/librte_mempool_ring.so.24.1 00:02:53.324 [345/378] Linking target lib/librte_mbuf.so.24.1 00:02:53.581 [346/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:53.581 [347/378] Linking target lib/librte_reorder.so.24.1 00:02:53.581 [348/378] Linking target lib/librte_compressdev.so.24.1 00:02:53.581 [349/378] Linking target lib/librte_net.so.24.1 00:02:53.581 [350/378] Linking target lib/librte_cryptodev.so.24.1 00:02:53.581 [351/378] Generating symbol file 
lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols 00:02:53.581 [352/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:53.581 [353/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:53.581 [354/378] Linking target lib/librte_security.so.24.1 00:02:53.581 [355/378] Linking target lib/librte_hash.so.24.1 00:02:53.581 [356/378] Linking target lib/librte_cmdline.so.24.1 00:02:53.581 [357/378] Linking target drivers/librte_compress_isal.so.24.1 00:02:53.839 [358/378] Linking target lib/librte_ethdev.so.24.1 00:02:53.839 [359/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols 00:02:53.839 [360/378] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:53.839 [361/378] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:53.839 [362/378] Linking target drivers/librte_common_mlx5.so.24.1 00:02:53.839 [363/378] Linking target lib/librte_power.so.24.1 00:02:54.097 [364/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols 00:02:54.097 [365/378] Linking target drivers/librte_compress_mlx5.so.24.1 00:02:54.097 [366/378] Linking target drivers/librte_crypto_mlx5.so.24.1 00:02:54.097 [367/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1 00:02:56.624 [368/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:56.624 [369/378] Linking static target lib/librte_vhost.a 00:02:56.882 [370/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:02:56.882 [371/378] Linking static target drivers/libtmp_rte_common_qat.a 00:02:57.140 [372/378] Generating drivers/rte_common_qat.pmd.c with a custom command 00:02:57.140 [373/378] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o 00:02:57.140 [374/378] Compiling C object 
drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:02:57.140 [375/378] Linking static target drivers/librte_common_qat.a 00:02:57.705 [376/378] Linking target drivers/librte_common_qat.so.24.1 00:02:57.705 [377/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.705 [378/378] Linking target lib/librte_vhost.so.24.1 00:02:57.705 INFO: autodetecting backend as ninja 00:02:57.705 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:58.638 CC lib/log/log.o 00:02:58.638 CC lib/log/log_flags.o 00:02:58.638 CC lib/log/log_deprecated.o 00:02:58.638 CC lib/ut_mock/mock.o 00:02:58.638 CC lib/ut/ut.o 00:02:58.896 LIB libspdk_log.a 00:02:58.896 LIB libspdk_ut.a 00:02:58.896 LIB libspdk_ut_mock.a 00:02:58.896 SO libspdk_ut.so.2.0 00:02:58.896 SO libspdk_ut_mock.so.6.0 00:02:58.896 SO libspdk_log.so.7.0 00:02:58.896 SYMLINK libspdk_ut.so 00:02:58.896 SYMLINK libspdk_ut_mock.so 00:02:58.896 SYMLINK libspdk_log.so 00:02:59.155 CC lib/ioat/ioat.o 00:02:59.155 CC lib/dma/dma.o 00:02:59.155 CC lib/util/base64.o 00:02:59.155 CXX lib/trace_parser/trace.o 00:02:59.155 CC lib/util/bit_array.o 00:02:59.155 CC lib/util/cpuset.o 00:02:59.155 CC lib/util/crc16.o 00:02:59.155 CC lib/util/crc32.o 00:02:59.155 CC lib/util/crc32c.o 00:02:59.155 CC lib/util/crc32_ieee.o 00:02:59.155 CC lib/util/crc64.o 00:02:59.155 CC lib/util/dif.o 00:02:59.155 CC lib/util/fd.o 00:02:59.155 CC lib/util/fd_group.o 00:02:59.155 CC lib/util/file.o 00:02:59.155 CC lib/util/hexlify.o 00:02:59.155 CC lib/util/iov.o 00:02:59.155 CC lib/util/math.o 00:02:59.155 CC lib/util/net.o 00:02:59.155 CC lib/util/pipe.o 00:02:59.155 CC lib/util/strerror_tls.o 00:02:59.155 CC lib/util/string.o 00:02:59.155 CC lib/util/uuid.o 00:02:59.155 CC lib/util/zipf.o 00:02:59.155 CC lib/util/xor.o 00:02:59.414 CC lib/vfio_user/host/vfio_user_pci.o 00:02:59.414 CC 
lib/vfio_user/host/vfio_user.o 00:02:59.414 LIB libspdk_dma.a 00:02:59.414 SO libspdk_dma.so.4.0 00:02:59.414 SYMLINK libspdk_dma.so 00:02:59.414 LIB libspdk_ioat.a 00:02:59.414 SO libspdk_ioat.so.7.0 00:02:59.414 SYMLINK libspdk_ioat.so 00:02:59.672 LIB libspdk_vfio_user.a 00:02:59.672 SO libspdk_vfio_user.so.5.0 00:02:59.672 SYMLINK libspdk_vfio_user.so 00:02:59.672 LIB libspdk_util.a 00:02:59.672 SO libspdk_util.so.10.0 00:02:59.929 SYMLINK libspdk_util.so 00:03:00.186 CC lib/vmd/vmd.o 00:03:00.186 CC lib/json/json_parse.o 00:03:00.186 CC lib/idxd/idxd.o 00:03:00.186 CC lib/vmd/led.o 00:03:00.186 CC lib/rdma_provider/common.o 00:03:00.186 CC lib/json/json_util.o 00:03:00.186 CC lib/rdma_utils/rdma_utils.o 00:03:00.186 CC lib/conf/conf.o 00:03:00.186 CC lib/reduce/reduce.o 00:03:00.186 CC lib/idxd/idxd_user.o 00:03:00.186 CC lib/env_dpdk/env.o 00:03:00.186 CC lib/json/json_write.o 00:03:00.186 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:00.186 CC lib/idxd/idxd_kernel.o 00:03:00.186 CC lib/env_dpdk/memory.o 00:03:00.186 CC lib/env_dpdk/pci.o 00:03:00.186 CC lib/env_dpdk/init.o 00:03:00.186 CC lib/env_dpdk/threads.o 00:03:00.186 CC lib/env_dpdk/pci_ioat.o 00:03:00.186 CC lib/env_dpdk/pci_virtio.o 00:03:00.186 CC lib/env_dpdk/pci_vmd.o 00:03:00.186 CC lib/env_dpdk/pci_event.o 00:03:00.186 CC lib/env_dpdk/pci_idxd.o 00:03:00.186 CC lib/env_dpdk/sigbus_handler.o 00:03:00.186 CC lib/env_dpdk/pci_dpdk.o 00:03:00.186 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:00.186 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:00.186 LIB libspdk_trace_parser.a 00:03:00.186 SO libspdk_trace_parser.so.5.0 00:03:00.444 SYMLINK libspdk_trace_parser.so 00:03:00.444 LIB libspdk_rdma_provider.a 00:03:00.444 SO libspdk_rdma_provider.so.6.0 00:03:00.444 SYMLINK libspdk_rdma_provider.so 00:03:00.444 LIB libspdk_rdma_utils.a 00:03:00.444 SO libspdk_rdma_utils.so.1.0 00:03:00.444 LIB libspdk_json.a 00:03:00.444 LIB libspdk_conf.a 00:03:00.444 SO libspdk_conf.so.6.0 00:03:00.444 SO libspdk_json.so.6.0 
00:03:00.444 SYMLINK libspdk_rdma_utils.so 00:03:00.444 SYMLINK libspdk_conf.so 00:03:00.444 SYMLINK libspdk_json.so 00:03:00.701 CC lib/jsonrpc/jsonrpc_server.o 00:03:00.701 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:00.701 CC lib/jsonrpc/jsonrpc_client.o 00:03:00.701 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:00.701 LIB libspdk_idxd.a 00:03:00.701 SO libspdk_idxd.so.12.0 00:03:00.701 SYMLINK libspdk_idxd.so 00:03:00.701 LIB libspdk_reduce.a 00:03:00.959 SO libspdk_reduce.so.6.1 00:03:00.959 LIB libspdk_vmd.a 00:03:00.959 SYMLINK libspdk_reduce.so 00:03:00.959 SO libspdk_vmd.so.6.0 00:03:00.959 SYMLINK libspdk_vmd.so 00:03:00.959 LIB libspdk_jsonrpc.a 00:03:00.959 SO libspdk_jsonrpc.so.6.0 00:03:00.959 SYMLINK libspdk_jsonrpc.so 00:03:01.217 CC lib/rpc/rpc.o 00:03:01.475 LIB libspdk_rpc.a 00:03:01.475 SO libspdk_rpc.so.6.0 00:03:01.475 SYMLINK libspdk_rpc.so 00:03:01.733 CC lib/keyring/keyring.o 00:03:01.733 CC lib/keyring/keyring_rpc.o 00:03:01.733 CC lib/trace/trace.o 00:03:01.733 CC lib/notify/notify.o 00:03:01.733 CC lib/trace/trace_flags.o 00:03:01.733 CC lib/notify/notify_rpc.o 00:03:01.733 CC lib/trace/trace_rpc.o 00:03:01.733 LIB libspdk_notify.a 00:03:01.992 SO libspdk_notify.so.6.0 00:03:01.992 LIB libspdk_keyring.a 00:03:01.992 SYMLINK libspdk_notify.so 00:03:01.992 LIB libspdk_trace.a 00:03:01.992 SO libspdk_keyring.so.1.0 00:03:01.992 SO libspdk_trace.so.10.0 00:03:01.992 SYMLINK libspdk_keyring.so 00:03:01.992 SYMLINK libspdk_trace.so 00:03:01.992 LIB libspdk_env_dpdk.a 00:03:02.250 SO libspdk_env_dpdk.so.15.0 00:03:02.250 CC lib/thread/thread.o 00:03:02.250 CC lib/thread/iobuf.o 00:03:02.250 CC lib/sock/sock.o 00:03:02.250 CC lib/sock/sock_rpc.o 00:03:02.250 SYMLINK libspdk_env_dpdk.so 00:03:02.508 LIB libspdk_sock.a 00:03:02.508 SO libspdk_sock.so.10.0 00:03:02.769 SYMLINK libspdk_sock.so 00:03:02.769 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:02.769 CC lib/nvme/nvme_ctrlr.o 00:03:02.769 CC lib/nvme/nvme_fabric.o 00:03:02.769 CC lib/nvme/nvme_ns_cmd.o 
00:03:02.769 CC lib/nvme/nvme_ns.o 00:03:02.769 CC lib/nvme/nvme_pcie_common.o 00:03:02.769 CC lib/nvme/nvme_pcie.o 00:03:02.769 CC lib/nvme/nvme_qpair.o 00:03:02.769 CC lib/nvme/nvme.o 00:03:02.769 CC lib/nvme/nvme_quirks.o 00:03:02.769 CC lib/nvme/nvme_transport.o 00:03:02.769 CC lib/nvme/nvme_discovery.o 00:03:02.769 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:02.769 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:02.769 CC lib/nvme/nvme_tcp.o 00:03:02.769 CC lib/nvme/nvme_opal.o 00:03:02.769 CC lib/nvme/nvme_io_msg.o 00:03:02.769 CC lib/nvme/nvme_poll_group.o 00:03:02.769 CC lib/nvme/nvme_zns.o 00:03:02.769 CC lib/nvme/nvme_stubs.o 00:03:02.769 CC lib/nvme/nvme_auth.o 00:03:02.769 CC lib/nvme/nvme_cuse.o 00:03:02.769 CC lib/nvme/nvme_rdma.o 00:03:03.745 LIB libspdk_thread.a 00:03:03.745 SO libspdk_thread.so.10.1 00:03:03.745 SYMLINK libspdk_thread.so 00:03:04.004 CC lib/virtio/virtio.o 00:03:04.004 CC lib/init/json_config.o 00:03:04.004 CC lib/virtio/virtio_vhost_user.o 00:03:04.004 CC lib/init/subsystem.o 00:03:04.004 CC lib/virtio/virtio_vfio_user.o 00:03:04.004 CC lib/virtio/virtio_pci.o 00:03:04.004 CC lib/init/subsystem_rpc.o 00:03:04.004 CC lib/blob/blobstore.o 00:03:04.004 CC lib/init/rpc.o 00:03:04.004 CC lib/blob/request.o 00:03:04.004 CC lib/blob/zeroes.o 00:03:04.004 CC lib/blob/blob_bs_dev.o 00:03:04.004 CC lib/accel/accel.o 00:03:04.004 CC lib/accel/accel_rpc.o 00:03:04.004 CC lib/accel/accel_sw.o 00:03:04.263 LIB libspdk_init.a 00:03:04.263 SO libspdk_init.so.5.0 00:03:04.263 LIB libspdk_virtio.a 00:03:04.263 SYMLINK libspdk_init.so 00:03:04.263 SO libspdk_virtio.so.7.0 00:03:04.523 SYMLINK libspdk_virtio.so 00:03:04.523 CC lib/event/app.o 00:03:04.523 CC lib/event/reactor.o 00:03:04.523 CC lib/event/log_rpc.o 00:03:04.523 CC lib/event/app_rpc.o 00:03:04.523 CC lib/event/scheduler_static.o 00:03:05.089 LIB libspdk_event.a 00:03:05.089 SO libspdk_event.so.14.0 00:03:05.089 SYMLINK libspdk_event.so 00:03:05.089 LIB libspdk_accel.a 00:03:05.089 SO 
libspdk_accel.so.16.0 00:03:05.089 SYMLINK libspdk_accel.so 00:03:05.089 LIB libspdk_nvme.a 00:03:05.347 SO libspdk_nvme.so.13.1 00:03:05.347 CC lib/bdev/bdev.o 00:03:05.347 CC lib/bdev/bdev_rpc.o 00:03:05.347 CC lib/bdev/bdev_zone.o 00:03:05.347 CC lib/bdev/part.o 00:03:05.347 CC lib/bdev/scsi_nvme.o 00:03:05.605 SYMLINK libspdk_nvme.so 00:03:06.982 LIB libspdk_blob.a 00:03:06.982 SO libspdk_blob.so.11.0 00:03:07.242 SYMLINK libspdk_blob.so 00:03:07.242 CC lib/blobfs/blobfs.o 00:03:07.242 CC lib/blobfs/tree.o 00:03:07.242 CC lib/lvol/lvol.o 00:03:07.807 LIB libspdk_bdev.a 00:03:08.065 SO libspdk_bdev.so.16.0 00:03:08.065 SYMLINK libspdk_bdev.so 00:03:08.065 LIB libspdk_blobfs.a 00:03:08.065 SO libspdk_blobfs.so.10.0 00:03:08.065 SYMLINK libspdk_blobfs.so 00:03:08.065 LIB libspdk_lvol.a 00:03:08.330 CC lib/nbd/nbd.o 00:03:08.330 CC lib/ublk/ublk.o 00:03:08.330 CC lib/nbd/nbd_rpc.o 00:03:08.330 CC lib/ublk/ublk_rpc.o 00:03:08.330 CC lib/ftl/ftl_core.o 00:03:08.330 CC lib/ftl/ftl_init.o 00:03:08.330 CC lib/nvmf/ctrlr.o 00:03:08.330 CC lib/scsi/dev.o 00:03:08.330 CC lib/ftl/ftl_layout.o 00:03:08.330 CC lib/nvmf/ctrlr_discovery.o 00:03:08.330 CC lib/ftl/ftl_debug.o 00:03:08.330 CC lib/scsi/lun.o 00:03:08.330 CC lib/nvmf/ctrlr_bdev.o 00:03:08.330 CC lib/ftl/ftl_io.o 00:03:08.330 CC lib/scsi/port.o 00:03:08.330 CC lib/nvmf/subsystem.o 00:03:08.330 CC lib/ftl/ftl_sb.o 00:03:08.330 CC lib/scsi/scsi.o 00:03:08.330 CC lib/ftl/ftl_l2p.o 00:03:08.330 CC lib/nvmf/nvmf.o 00:03:08.330 CC lib/scsi/scsi_bdev.o 00:03:08.330 CC lib/nvmf/nvmf_rpc.o 00:03:08.330 CC lib/ftl/ftl_l2p_flat.o 00:03:08.330 CC lib/nvmf/transport.o 00:03:08.330 CC lib/scsi/scsi_pr.o 00:03:08.330 CC lib/ftl/ftl_nv_cache.o 00:03:08.330 CC lib/ftl/ftl_band.o 00:03:08.330 CC lib/scsi/scsi_rpc.o 00:03:08.330 CC lib/nvmf/tcp.o 00:03:08.330 CC lib/scsi/task.o 00:03:08.330 CC lib/ftl/ftl_band_ops.o 00:03:08.330 CC lib/nvmf/stubs.o 00:03:08.330 CC lib/ftl/ftl_writer.o 00:03:08.330 SO libspdk_lvol.so.10.0 00:03:08.330 
CC lib/nvmf/mdns_server.o 00:03:08.330 CC lib/nvmf/rdma.o 00:03:08.330 CC lib/ftl/ftl_rq.o 00:03:08.330 CC lib/nvmf/auth.o 00:03:08.330 CC lib/ftl/ftl_reloc.o 00:03:08.330 CC lib/ftl/ftl_l2p_cache.o 00:03:08.330 CC lib/ftl/ftl_p2l.o 00:03:08.330 CC lib/ftl/mngt/ftl_mngt.o 00:03:08.330 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:08.330 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:08.330 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:08.330 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:08.330 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:08.330 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:08.330 SYMLINK libspdk_lvol.so 00:03:08.330 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:08.590 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:08.590 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:08.590 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:08.590 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:08.590 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:08.590 CC lib/ftl/utils/ftl_conf.o 00:03:08.590 CC lib/ftl/utils/ftl_md.o 00:03:08.590 CC lib/ftl/utils/ftl_mempool.o 00:03:08.590 CC lib/ftl/utils/ftl_bitmap.o 00:03:08.590 CC lib/ftl/utils/ftl_property.o 00:03:08.590 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:08.590 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:08.590 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:08.590 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:08.590 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:08.852 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:08.852 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:08.852 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:08.852 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:08.852 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:08.852 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:08.852 CC lib/ftl/base/ftl_base_dev.o 00:03:08.852 CC lib/ftl/base/ftl_base_bdev.o 00:03:08.852 CC lib/ftl/ftl_trace.o 00:03:09.110 LIB libspdk_nbd.a 00:03:09.110 SO libspdk_nbd.so.7.0 00:03:09.110 LIB libspdk_scsi.a 00:03:09.110 SYMLINK libspdk_nbd.so 00:03:09.110 SO libspdk_scsi.so.9.0 00:03:09.110 SYMLINK libspdk_scsi.so 00:03:09.368 LIB libspdk_ublk.a 00:03:09.368 SO 
libspdk_ublk.so.3.0 00:03:09.368 CC lib/iscsi/conn.o 00:03:09.368 CC lib/vhost/vhost.o 00:03:09.368 CC lib/iscsi/init_grp.o 00:03:09.368 CC lib/vhost/vhost_rpc.o 00:03:09.368 CC lib/iscsi/iscsi.o 00:03:09.368 CC lib/vhost/vhost_scsi.o 00:03:09.368 CC lib/vhost/vhost_blk.o 00:03:09.368 CC lib/iscsi/md5.o 00:03:09.368 CC lib/vhost/rte_vhost_user.o 00:03:09.368 CC lib/iscsi/param.o 00:03:09.368 CC lib/iscsi/portal_grp.o 00:03:09.368 CC lib/iscsi/tgt_node.o 00:03:09.368 CC lib/iscsi/iscsi_subsystem.o 00:03:09.368 CC lib/iscsi/iscsi_rpc.o 00:03:09.368 CC lib/iscsi/task.o 00:03:09.368 SYMLINK libspdk_ublk.so 00:03:09.627 LIB libspdk_ftl.a 00:03:09.889 SO libspdk_ftl.so.9.0 00:03:10.146 SYMLINK libspdk_ftl.so 00:03:10.711 LIB libspdk_vhost.a 00:03:10.711 SO libspdk_vhost.so.8.0 00:03:10.711 LIB libspdk_nvmf.a 00:03:10.711 SYMLINK libspdk_vhost.so 00:03:10.711 SO libspdk_nvmf.so.19.0 00:03:10.970 LIB libspdk_iscsi.a 00:03:10.970 SO libspdk_iscsi.so.8.0 00:03:10.970 SYMLINK libspdk_nvmf.so 00:03:10.970 SYMLINK libspdk_iscsi.so 00:03:11.229 CC module/env_dpdk/env_dpdk_rpc.o 00:03:11.487 CC module/keyring/file/keyring.o 00:03:11.487 CC module/keyring/file/keyring_rpc.o 00:03:11.487 CC module/sock/posix/posix.o 00:03:11.487 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:11.487 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:03:11.487 CC module/keyring/linux/keyring.o 00:03:11.487 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:03:11.487 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:03:11.487 CC module/accel/iaa/accel_iaa.o 00:03:11.487 CC module/blob/bdev/blob_bdev.o 00:03:11.487 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:03:11.487 CC module/keyring/linux/keyring_rpc.o 00:03:11.487 CC module/scheduler/gscheduler/gscheduler.o 00:03:11.487 CC module/accel/iaa/accel_iaa_rpc.o 00:03:11.487 CC module/accel/error/accel_error.o 00:03:11.487 CC module/accel/error/accel_error_rpc.o 00:03:11.487 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:03:11.487 CC module/accel/dsa/accel_dsa.o 00:03:11.487 CC module/accel/ioat/accel_ioat.o 00:03:11.487 CC module/accel/dsa/accel_dsa_rpc.o 00:03:11.487 CC module/accel/ioat/accel_ioat_rpc.o 00:03:11.487 LIB libspdk_env_dpdk_rpc.a 00:03:11.487 SO libspdk_env_dpdk_rpc.so.6.0 00:03:11.487 SYMLINK libspdk_env_dpdk_rpc.so 00:03:11.487 LIB libspdk_keyring_file.a 00:03:11.487 LIB libspdk_keyring_linux.a 00:03:11.487 LIB libspdk_scheduler_gscheduler.a 00:03:11.487 LIB libspdk_scheduler_dpdk_governor.a 00:03:11.487 SO libspdk_keyring_file.so.1.0 00:03:11.487 SO libspdk_keyring_linux.so.1.0 00:03:11.487 LIB libspdk_accel_error.a 00:03:11.487 SO libspdk_scheduler_gscheduler.so.4.0 00:03:11.487 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:11.745 LIB libspdk_accel_ioat.a 00:03:11.746 LIB libspdk_scheduler_dynamic.a 00:03:11.746 SO libspdk_accel_error.so.2.0 00:03:11.746 LIB libspdk_accel_iaa.a 00:03:11.746 SO libspdk_accel_ioat.so.6.0 00:03:11.746 SYMLINK libspdk_keyring_file.so 00:03:11.746 SYMLINK libspdk_keyring_linux.so 00:03:11.746 SO libspdk_scheduler_dynamic.so.4.0 00:03:11.746 SYMLINK libspdk_scheduler_gscheduler.so 00:03:11.746 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:11.746 SO libspdk_accel_iaa.so.3.0 00:03:11.746 SYMLINK libspdk_accel_error.so 00:03:11.746 LIB libspdk_accel_dsa.a 00:03:11.746 LIB libspdk_blob_bdev.a 00:03:11.746 SYMLINK libspdk_accel_ioat.so 00:03:11.746 SYMLINK libspdk_scheduler_dynamic.so 00:03:11.746 SO libspdk_accel_dsa.so.5.0 00:03:11.746 SO libspdk_blob_bdev.so.11.0 00:03:11.746 SYMLINK libspdk_accel_iaa.so 00:03:11.746 SYMLINK libspdk_blob_bdev.so 00:03:11.746 SYMLINK libspdk_accel_dsa.so 00:03:12.015 CC module/bdev/delay/vbdev_delay.o 00:03:12.015 CC module/bdev/nvme/bdev_nvme.o 00:03:12.015 CC module/blobfs/bdev/blobfs_bdev.o 00:03:12.015 CC module/bdev/malloc/bdev_malloc.o 00:03:12.015 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:12.015 CC module/bdev/malloc/bdev_malloc_rpc.o 
00:03:12.015 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:12.015 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:12.015 CC module/bdev/nvme/nvme_rpc.o 00:03:12.015 CC module/bdev/error/vbdev_error.o 00:03:12.015 CC module/bdev/gpt/gpt.o 00:03:12.015 CC module/bdev/error/vbdev_error_rpc.o 00:03:12.015 CC module/bdev/ftl/bdev_ftl.o 00:03:12.015 CC module/bdev/nvme/bdev_mdns_client.o 00:03:12.015 CC module/bdev/nvme/vbdev_opal.o 00:03:12.015 CC module/bdev/gpt/vbdev_gpt.o 00:03:12.015 CC module/bdev/null/bdev_null.o 00:03:12.015 CC module/bdev/compress/vbdev_compress.o 00:03:12.015 CC module/bdev/raid/bdev_raid.o 00:03:12.015 CC module/bdev/null/bdev_null_rpc.o 00:03:12.015 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:12.015 CC module/bdev/compress/vbdev_compress_rpc.o 00:03:12.015 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:12.015 CC module/bdev/raid/bdev_raid_rpc.o 00:03:12.015 CC module/bdev/split/vbdev_split.o 00:03:12.015 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:12.015 CC module/bdev/passthru/vbdev_passthru.o 00:03:12.015 CC module/bdev/raid/bdev_raid_sb.o 00:03:12.015 CC module/bdev/split/vbdev_split_rpc.o 00:03:12.015 CC module/bdev/raid/raid0.o 00:03:12.015 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:12.015 CC module/bdev/raid/raid1.o 00:03:12.015 CC module/bdev/raid/concat.o 00:03:12.015 CC module/bdev/iscsi/bdev_iscsi.o 00:03:12.015 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:12.015 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:12.015 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:12.015 CC module/bdev/lvol/vbdev_lvol.o 00:03:12.015 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:12.015 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:12.015 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:12.015 CC module/bdev/crypto/vbdev_crypto.o 00:03:12.015 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:12.015 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:03:12.015 CC module/bdev/aio/bdev_aio.o 00:03:12.278 LIB libspdk_sock_posix.a 00:03:12.278 SO 
libspdk_sock_posix.so.6.0 00:03:12.278 CC module/bdev/aio/bdev_aio_rpc.o 00:03:12.535 LIB libspdk_blobfs_bdev.a 00:03:12.535 SO libspdk_blobfs_bdev.so.6.0 00:03:12.535 SYMLINK libspdk_sock_posix.so 00:03:12.535 LIB libspdk_bdev_split.a 00:03:12.535 LIB libspdk_bdev_error.a 00:03:12.535 SO libspdk_bdev_split.so.6.0 00:03:12.535 SO libspdk_bdev_error.so.6.0 00:03:12.535 LIB libspdk_bdev_gpt.a 00:03:12.535 SYMLINK libspdk_blobfs_bdev.so 00:03:12.535 LIB libspdk_bdev_null.a 00:03:12.535 LIB libspdk_bdev_passthru.a 00:03:12.535 SO libspdk_bdev_gpt.so.6.0 00:03:12.535 SYMLINK libspdk_bdev_split.so 00:03:12.535 LIB libspdk_bdev_ftl.a 00:03:12.535 SO libspdk_bdev_null.so.6.0 00:03:12.535 SO libspdk_bdev_passthru.so.6.0 00:03:12.535 LIB libspdk_bdev_aio.a 00:03:12.535 LIB libspdk_bdev_crypto.a 00:03:12.535 SO libspdk_bdev_ftl.so.6.0 00:03:12.535 SYMLINK libspdk_bdev_error.so 00:03:12.535 LIB libspdk_bdev_zone_block.a 00:03:12.535 LIB libspdk_bdev_compress.a 00:03:12.535 LIB libspdk_bdev_malloc.a 00:03:12.535 SO libspdk_bdev_crypto.so.6.0 00:03:12.535 SO libspdk_bdev_aio.so.6.0 00:03:12.535 SYMLINK libspdk_bdev_gpt.so 00:03:12.535 SYMLINK libspdk_bdev_null.so 00:03:12.535 LIB libspdk_bdev_delay.a 00:03:12.535 SO libspdk_bdev_zone_block.so.6.0 00:03:12.535 SO libspdk_bdev_compress.so.6.0 00:03:12.535 SO libspdk_bdev_malloc.so.6.0 00:03:12.792 LIB libspdk_bdev_iscsi.a 00:03:12.792 SYMLINK libspdk_bdev_passthru.so 00:03:12.792 SYMLINK libspdk_bdev_ftl.so 00:03:12.792 SO libspdk_bdev_delay.so.6.0 00:03:12.792 SO libspdk_bdev_iscsi.so.6.0 00:03:12.792 SYMLINK libspdk_bdev_aio.so 00:03:12.793 SYMLINK libspdk_bdev_crypto.so 00:03:12.793 SYMLINK libspdk_bdev_zone_block.so 00:03:12.793 SYMLINK libspdk_bdev_compress.so 00:03:12.793 SYMLINK libspdk_bdev_malloc.so 00:03:12.793 SYMLINK libspdk_bdev_delay.so 00:03:12.793 SYMLINK libspdk_bdev_iscsi.so 00:03:12.793 LIB libspdk_bdev_lvol.a 00:03:12.793 SO libspdk_bdev_lvol.so.6.0 00:03:12.793 LIB libspdk_bdev_virtio.a 00:03:12.793 SO 
libspdk_bdev_virtio.so.6.0 00:03:12.793 SYMLINK libspdk_bdev_lvol.so 00:03:12.793 SYMLINK libspdk_bdev_virtio.so 00:03:13.358 LIB libspdk_bdev_raid.a 00:03:13.358 SO libspdk_bdev_raid.so.6.0 00:03:13.358 LIB libspdk_accel_dpdk_compressdev.a 00:03:13.358 SO libspdk_accel_dpdk_compressdev.so.3.0 00:03:13.358 SYMLINK libspdk_bdev_raid.so 00:03:13.358 SYMLINK libspdk_accel_dpdk_compressdev.so 00:03:14.295 LIB libspdk_accel_dpdk_cryptodev.a 00:03:14.295 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:03:14.295 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:03:14.554 LIB libspdk_bdev_nvme.a 00:03:14.554 SO libspdk_bdev_nvme.so.7.0 00:03:14.554 SYMLINK libspdk_bdev_nvme.so 00:03:15.122 CC module/event/subsystems/vmd/vmd.o 00:03:15.122 CC module/event/subsystems/iobuf/iobuf.o 00:03:15.122 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:15.122 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:15.122 CC module/event/subsystems/scheduler/scheduler.o 00:03:15.122 CC module/event/subsystems/sock/sock.o 00:03:15.122 CC module/event/subsystems/keyring/keyring.o 00:03:15.122 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:15.122 LIB libspdk_event_keyring.a 00:03:15.122 LIB libspdk_event_vhost_blk.a 00:03:15.122 LIB libspdk_event_scheduler.a 00:03:15.122 LIB libspdk_event_vmd.a 00:03:15.122 LIB libspdk_event_sock.a 00:03:15.122 SO libspdk_event_keyring.so.1.0 00:03:15.122 LIB libspdk_event_iobuf.a 00:03:15.122 SO libspdk_event_vhost_blk.so.3.0 00:03:15.122 SO libspdk_event_scheduler.so.4.0 00:03:15.122 SO libspdk_event_sock.so.5.0 00:03:15.122 SO libspdk_event_vmd.so.6.0 00:03:15.122 SO libspdk_event_iobuf.so.3.0 00:03:15.122 SYMLINK libspdk_event_keyring.so 00:03:15.122 SYMLINK libspdk_event_vhost_blk.so 00:03:15.122 SYMLINK libspdk_event_scheduler.so 00:03:15.122 SYMLINK libspdk_event_sock.so 00:03:15.122 SYMLINK libspdk_event_vmd.so 00:03:15.122 SYMLINK libspdk_event_iobuf.so 00:03:15.381 CC module/event/subsystems/accel/accel.o 00:03:15.639 LIB libspdk_event_accel.a 00:03:15.639 
SO libspdk_event_accel.so.6.0 00:03:15.639 SYMLINK libspdk_event_accel.so 00:03:15.898 CC module/event/subsystems/bdev/bdev.o 00:03:15.898 LIB libspdk_event_bdev.a 00:03:15.898 SO libspdk_event_bdev.so.6.0 00:03:16.157 SYMLINK libspdk_event_bdev.so 00:03:16.157 CC module/event/subsystems/scsi/scsi.o 00:03:16.157 CC module/event/subsystems/ublk/ublk.o 00:03:16.157 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:16.157 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:16.157 CC module/event/subsystems/nbd/nbd.o 00:03:16.416 LIB libspdk_event_ublk.a 00:03:16.416 LIB libspdk_event_nbd.a 00:03:16.416 SO libspdk_event_nbd.so.6.0 00:03:16.416 SO libspdk_event_ublk.so.3.0 00:03:16.416 LIB libspdk_event_scsi.a 00:03:16.416 SO libspdk_event_scsi.so.6.0 00:03:16.416 SYMLINK libspdk_event_nbd.so 00:03:16.416 SYMLINK libspdk_event_ublk.so 00:03:16.416 SYMLINK libspdk_event_scsi.so 00:03:16.416 LIB libspdk_event_nvmf.a 00:03:16.416 SO libspdk_event_nvmf.so.6.0 00:03:16.416 SYMLINK libspdk_event_nvmf.so 00:03:16.675 CC module/event/subsystems/iscsi/iscsi.o 00:03:16.675 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:16.675 LIB libspdk_event_vhost_scsi.a 00:03:16.675 LIB libspdk_event_iscsi.a 00:03:16.675 SO libspdk_event_vhost_scsi.so.3.0 00:03:16.675 SO libspdk_event_iscsi.so.6.0 00:03:16.933 SYMLINK libspdk_event_vhost_scsi.so 00:03:16.933 SYMLINK libspdk_event_iscsi.so 00:03:16.933 SO libspdk.so.6.0 00:03:16.933 SYMLINK libspdk.so 00:03:17.197 CXX app/trace/trace.o 00:03:17.197 CC app/trace_record/trace_record.o 00:03:17.197 CC app/spdk_nvme_perf/perf.o 00:03:17.197 CC app/spdk_top/spdk_top.o 00:03:17.197 CC app/spdk_lspci/spdk_lspci.o 00:03:17.197 CC app/spdk_nvme_identify/identify.o 00:03:17.197 CC app/spdk_nvme_discover/discovery_aer.o 00:03:17.197 CC test/rpc_client/rpc_client_test.o 00:03:17.197 TEST_HEADER include/spdk/accel.h 00:03:17.197 TEST_HEADER include/spdk/accel_module.h 00:03:17.197 TEST_HEADER include/spdk/assert.h 00:03:17.197 TEST_HEADER 
include/spdk/barrier.h 00:03:17.197 TEST_HEADER include/spdk/base64.h 00:03:17.197 TEST_HEADER include/spdk/bdev.h 00:03:17.197 TEST_HEADER include/spdk/bdev_module.h 00:03:17.197 TEST_HEADER include/spdk/bdev_zone.h 00:03:17.197 TEST_HEADER include/spdk/bit_array.h 00:03:17.197 TEST_HEADER include/spdk/bit_pool.h 00:03:17.197 TEST_HEADER include/spdk/blob_bdev.h 00:03:17.197 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:17.197 TEST_HEADER include/spdk/blobfs.h 00:03:17.197 TEST_HEADER include/spdk/blob.h 00:03:17.197 TEST_HEADER include/spdk/conf.h 00:03:17.197 TEST_HEADER include/spdk/config.h 00:03:17.197 TEST_HEADER include/spdk/cpuset.h 00:03:17.197 TEST_HEADER include/spdk/crc16.h 00:03:17.197 TEST_HEADER include/spdk/crc32.h 00:03:17.197 TEST_HEADER include/spdk/crc64.h 00:03:17.197 TEST_HEADER include/spdk/dif.h 00:03:17.197 TEST_HEADER include/spdk/dma.h 00:03:17.197 TEST_HEADER include/spdk/endian.h 00:03:17.197 TEST_HEADER include/spdk/env_dpdk.h 00:03:17.197 TEST_HEADER include/spdk/env.h 00:03:17.197 TEST_HEADER include/spdk/event.h 00:03:17.197 TEST_HEADER include/spdk/fd_group.h 00:03:17.197 TEST_HEADER include/spdk/fd.h 00:03:17.197 TEST_HEADER include/spdk/ftl.h 00:03:17.197 TEST_HEADER include/spdk/file.h 00:03:17.197 TEST_HEADER include/spdk/gpt_spec.h 00:03:17.197 TEST_HEADER include/spdk/hexlify.h 00:03:17.197 TEST_HEADER include/spdk/histogram_data.h 00:03:17.197 TEST_HEADER include/spdk/idxd.h 00:03:17.197 TEST_HEADER include/spdk/init.h 00:03:17.197 TEST_HEADER include/spdk/idxd_spec.h 00:03:17.197 TEST_HEADER include/spdk/ioat.h 00:03:17.197 TEST_HEADER include/spdk/iscsi_spec.h 00:03:17.197 TEST_HEADER include/spdk/ioat_spec.h 00:03:17.197 TEST_HEADER include/spdk/json.h 00:03:17.197 TEST_HEADER include/spdk/jsonrpc.h 00:03:17.197 TEST_HEADER include/spdk/keyring.h 00:03:17.197 TEST_HEADER include/spdk/keyring_module.h 00:03:17.197 TEST_HEADER include/spdk/likely.h 00:03:17.197 TEST_HEADER include/spdk/log.h 00:03:17.197 TEST_HEADER 
include/spdk/lvol.h 00:03:17.197 TEST_HEADER include/spdk/memory.h 00:03:17.197 TEST_HEADER include/spdk/mmio.h 00:03:17.197 TEST_HEADER include/spdk/nbd.h 00:03:17.197 TEST_HEADER include/spdk/net.h 00:03:17.197 TEST_HEADER include/spdk/nvme.h 00:03:17.197 TEST_HEADER include/spdk/notify.h 00:03:17.197 TEST_HEADER include/spdk/nvme_intel.h 00:03:17.197 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:17.197 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:17.197 TEST_HEADER include/spdk/nvme_spec.h 00:03:17.197 TEST_HEADER include/spdk/nvme_zns.h 00:03:17.197 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:17.197 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:17.197 TEST_HEADER include/spdk/nvmf.h 00:03:17.197 TEST_HEADER include/spdk/nvmf_spec.h 00:03:17.197 TEST_HEADER include/spdk/nvmf_transport.h 00:03:17.197 TEST_HEADER include/spdk/opal.h 00:03:17.197 TEST_HEADER include/spdk/opal_spec.h 00:03:17.197 TEST_HEADER include/spdk/pci_ids.h 00:03:17.197 TEST_HEADER include/spdk/pipe.h 00:03:17.197 TEST_HEADER include/spdk/queue.h 00:03:17.197 TEST_HEADER include/spdk/reduce.h 00:03:17.197 TEST_HEADER include/spdk/rpc.h 00:03:17.197 TEST_HEADER include/spdk/scheduler.h 00:03:17.197 TEST_HEADER include/spdk/scsi.h 00:03:17.197 TEST_HEADER include/spdk/scsi_spec.h 00:03:17.197 TEST_HEADER include/spdk/sock.h 00:03:17.197 TEST_HEADER include/spdk/stdinc.h 00:03:17.198 TEST_HEADER include/spdk/string.h 00:03:17.198 TEST_HEADER include/spdk/thread.h 00:03:17.198 CC app/spdk_dd/spdk_dd.o 00:03:17.198 TEST_HEADER include/spdk/trace.h 00:03:17.198 TEST_HEADER include/spdk/trace_parser.h 00:03:17.198 TEST_HEADER include/spdk/tree.h 00:03:17.198 TEST_HEADER include/spdk/ublk.h 00:03:17.198 TEST_HEADER include/spdk/util.h 00:03:17.198 TEST_HEADER include/spdk/uuid.h 00:03:17.198 TEST_HEADER include/spdk/version.h 00:03:17.198 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:17.198 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:17.198 TEST_HEADER include/spdk/vhost.h 00:03:17.198 
TEST_HEADER include/spdk/vmd.h 00:03:17.198 TEST_HEADER include/spdk/xor.h 00:03:17.198 TEST_HEADER include/spdk/zipf.h 00:03:17.198 CXX test/cpp_headers/accel.o 00:03:17.198 CXX test/cpp_headers/accel_module.o 00:03:17.198 CXX test/cpp_headers/assert.o 00:03:17.198 CXX test/cpp_headers/barrier.o 00:03:17.198 CXX test/cpp_headers/base64.o 00:03:17.198 CXX test/cpp_headers/bdev.o 00:03:17.198 CXX test/cpp_headers/bdev_zone.o 00:03:17.198 CXX test/cpp_headers/bdev_module.o 00:03:17.198 CXX test/cpp_headers/bit_array.o 00:03:17.198 CXX test/cpp_headers/bit_pool.o 00:03:17.198 CXX test/cpp_headers/blob_bdev.o 00:03:17.198 CXX test/cpp_headers/blobfs_bdev.o 00:03:17.198 CXX test/cpp_headers/blobfs.o 00:03:17.198 CXX test/cpp_headers/blob.o 00:03:17.198 CXX test/cpp_headers/conf.o 00:03:17.198 CXX test/cpp_headers/config.o 00:03:17.198 CXX test/cpp_headers/cpuset.o 00:03:17.198 CXX test/cpp_headers/crc16.o 00:03:17.198 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:17.198 CC app/nvmf_tgt/nvmf_main.o 00:03:17.198 CC app/iscsi_tgt/iscsi_tgt.o 00:03:17.198 CXX test/cpp_headers/crc32.o 00:03:17.198 CC app/spdk_tgt/spdk_tgt.o 00:03:17.198 CC test/thread/poller_perf/poller_perf.o 00:03:17.198 CC test/env/memory/memory_ut.o 00:03:17.198 CC test/app/histogram_perf/histogram_perf.o 00:03:17.198 CC examples/util/zipf/zipf.o 00:03:17.198 CC test/env/vtophys/vtophys.o 00:03:17.198 CC examples/ioat/perf/perf.o 00:03:17.198 CC test/app/jsoncat/jsoncat.o 00:03:17.198 CC test/env/pci/pci_ut.o 00:03:17.198 CC test/app/stub/stub.o 00:03:17.198 CC app/fio/nvme/fio_plugin.o 00:03:17.198 CC examples/ioat/verify/verify.o 00:03:17.198 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:17.459 CC test/dma/test_dma/test_dma.o 00:03:17.459 CC app/fio/bdev/fio_plugin.o 00:03:17.459 CC test/app/bdev_svc/bdev_svc.o 00:03:17.459 LINK spdk_lspci 00:03:17.459 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.459 LINK rpc_client_test 00:03:17.459 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 
00:03:17.459 LINK spdk_nvme_discover 00:03:17.724 LINK jsoncat 00:03:17.724 LINK vtophys 00:03:17.724 LINK poller_perf 00:03:17.724 LINK histogram_perf 00:03:17.724 LINK spdk_trace_record 00:03:17.724 LINK interrupt_tgt 00:03:17.724 LINK zipf 00:03:17.724 CXX test/cpp_headers/crc64.o 00:03:17.724 CXX test/cpp_headers/dif.o 00:03:17.724 CXX test/cpp_headers/dma.o 00:03:17.724 CXX test/cpp_headers/endian.o 00:03:17.724 LINK nvmf_tgt 00:03:17.724 CXX test/cpp_headers/env_dpdk.o 00:03:17.724 CXX test/cpp_headers/env.o 00:03:17.724 CXX test/cpp_headers/event.o 00:03:17.724 LINK env_dpdk_post_init 00:03:17.724 LINK stub 00:03:17.724 CXX test/cpp_headers/fd_group.o 00:03:17.724 CXX test/cpp_headers/fd.o 00:03:17.724 CXX test/cpp_headers/file.o 00:03:17.724 CXX test/cpp_headers/ftl.o 00:03:17.724 CXX test/cpp_headers/gpt_spec.o 00:03:17.724 CXX test/cpp_headers/hexlify.o 00:03:17.724 LINK iscsi_tgt 00:03:17.724 CXX test/cpp_headers/histogram_data.o 00:03:17.724 CXX test/cpp_headers/idxd.o 00:03:17.725 LINK spdk_tgt 00:03:17.725 LINK verify 00:03:17.725 LINK bdev_svc 00:03:17.725 CXX test/cpp_headers/idxd_spec.o 00:03:17.725 LINK ioat_perf 00:03:17.725 CXX test/cpp_headers/init.o 00:03:17.725 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:17.987 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:17.987 CXX test/cpp_headers/ioat.o 00:03:17.987 LINK spdk_dd 00:03:17.987 CXX test/cpp_headers/ioat_spec.o 00:03:17.987 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:17.987 CXX test/cpp_headers/iscsi_spec.o 00:03:17.987 CXX test/cpp_headers/json.o 00:03:17.987 CXX test/cpp_headers/jsonrpc.o 00:03:17.987 CXX test/cpp_headers/keyring.o 00:03:17.987 CXX test/cpp_headers/keyring_module.o 00:03:17.987 LINK spdk_trace 00:03:17.987 CXX test/cpp_headers/likely.o 00:03:17.987 CXX test/cpp_headers/log.o 00:03:17.987 CXX test/cpp_headers/lvol.o 00:03:17.987 LINK pci_ut 00:03:17.987 CXX test/cpp_headers/memory.o 00:03:17.987 CXX test/cpp_headers/mmio.o 00:03:17.987 CXX test/cpp_headers/nbd.o 
00:03:17.987 CXX test/cpp_headers/net.o 00:03:17.987 CXX test/cpp_headers/notify.o 00:03:17.987 CXX test/cpp_headers/nvme.o 00:03:18.250 CXX test/cpp_headers/nvme_intel.o 00:03:18.250 CXX test/cpp_headers/nvme_ocssd.o 00:03:18.250 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:18.250 CXX test/cpp_headers/nvme_spec.o 00:03:18.250 CXX test/cpp_headers/nvme_zns.o 00:03:18.250 CXX test/cpp_headers/nvmf_cmd.o 00:03:18.250 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:18.250 CXX test/cpp_headers/nvmf.o 00:03:18.250 LINK test_dma 00:03:18.250 CXX test/cpp_headers/nvmf_spec.o 00:03:18.250 CXX test/cpp_headers/nvmf_transport.o 00:03:18.250 CXX test/cpp_headers/opal.o 00:03:18.250 CXX test/cpp_headers/opal_spec.o 00:03:18.250 CXX test/cpp_headers/pci_ids.o 00:03:18.250 CXX test/cpp_headers/pipe.o 00:03:18.250 LINK nvme_fuzz 00:03:18.250 CC test/event/event_perf/event_perf.o 00:03:18.517 LINK spdk_nvme 00:03:18.517 CXX test/cpp_headers/queue.o 00:03:18.517 CC examples/vmd/lsvmd/lsvmd.o 00:03:18.517 LINK spdk_bdev 00:03:18.517 CC examples/sock/hello_world/hello_sock.o 00:03:18.517 CXX test/cpp_headers/reduce.o 00:03:18.517 CC examples/thread/thread/thread_ex.o 00:03:18.517 CC test/event/reactor/reactor.o 00:03:18.517 CXX test/cpp_headers/rpc.o 00:03:18.517 CC examples/idxd/perf/perf.o 00:03:18.517 CXX test/cpp_headers/scsi.o 00:03:18.517 CXX test/cpp_headers/scheduler.o 00:03:18.517 CC examples/vmd/led/led.o 00:03:18.517 CXX test/cpp_headers/scsi_spec.o 00:03:18.517 CC test/event/reactor_perf/reactor_perf.o 00:03:18.517 CXX test/cpp_headers/sock.o 00:03:18.517 CXX test/cpp_headers/stdinc.o 00:03:18.517 CXX test/cpp_headers/string.o 00:03:18.517 CXX test/cpp_headers/thread.o 00:03:18.517 CXX test/cpp_headers/trace.o 00:03:18.517 CC test/event/app_repeat/app_repeat.o 00:03:18.517 CXX test/cpp_headers/trace_parser.o 00:03:18.517 CXX test/cpp_headers/tree.o 00:03:18.517 CXX test/cpp_headers/ublk.o 00:03:18.517 CXX test/cpp_headers/util.o 00:03:18.517 CXX test/cpp_headers/uuid.o 
00:03:18.517 CXX test/cpp_headers/version.o 00:03:18.517 CXX test/cpp_headers/vfio_user_pci.o 00:03:18.517 CXX test/cpp_headers/vfio_user_spec.o 00:03:18.517 CXX test/cpp_headers/vhost.o 00:03:18.517 CXX test/cpp_headers/vmd.o 00:03:18.517 CXX test/cpp_headers/xor.o 00:03:18.517 CXX test/cpp_headers/zipf.o 00:03:18.781 CC test/event/scheduler/scheduler.o 00:03:18.781 LINK spdk_nvme_perf 00:03:18.781 LINK event_perf 00:03:18.781 CC app/vhost/vhost.o 00:03:18.781 LINK mem_callbacks 00:03:18.781 LINK lsvmd 00:03:18.781 LINK reactor 00:03:18.781 LINK spdk_nvme_identify 00:03:18.781 LINK reactor_perf 00:03:18.781 LINK vhost_fuzz 00:03:18.781 LINK led 00:03:18.781 LINK spdk_top 00:03:18.781 LINK app_repeat 00:03:18.781 LINK thread 00:03:19.041 LINK hello_sock 00:03:19.041 CC test/nvme/aer/aer.o 00:03:19.041 CC test/nvme/reset/reset.o 00:03:19.041 CC test/nvme/sgl/sgl.o 00:03:19.041 CC test/nvme/startup/startup.o 00:03:19.041 CC test/nvme/e2edp/nvme_dp.o 00:03:19.041 CC test/nvme/reserve/reserve.o 00:03:19.041 CC test/nvme/overhead/overhead.o 00:03:19.041 CC test/nvme/simple_copy/simple_copy.o 00:03:19.041 CC test/nvme/err_injection/err_injection.o 00:03:19.041 CC test/nvme/connect_stress/connect_stress.o 00:03:19.041 CC test/nvme/boot_partition/boot_partition.o 00:03:19.041 CC test/accel/dif/dif.o 00:03:19.041 CC test/blobfs/mkfs/mkfs.o 00:03:19.041 CC test/nvme/compliance/nvme_compliance.o 00:03:19.041 CC test/nvme/fused_ordering/fused_ordering.o 00:03:19.041 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:19.041 CC test/nvme/fdp/fdp.o 00:03:19.041 CC test/nvme/cuse/cuse.o 00:03:19.041 LINK vhost 00:03:19.041 CC test/lvol/esnap/esnap.o 00:03:19.041 LINK idxd_perf 00:03:19.041 LINK scheduler 00:03:19.300 LINK boot_partition 00:03:19.300 LINK startup 00:03:19.300 LINK fused_ordering 00:03:19.300 LINK connect_stress 00:03:19.300 LINK err_injection 00:03:19.300 LINK reserve 00:03:19.300 LINK aer 00:03:19.300 LINK overhead 00:03:19.300 LINK doorbell_aers 00:03:19.300 LINK 
reset 00:03:19.300 LINK sgl 00:03:19.300 LINK simple_copy 00:03:19.300 LINK memory_ut 00:03:19.300 LINK mkfs 00:03:19.300 LINK fdp 00:03:19.300 CC examples/nvme/hotplug/hotplug.o 00:03:19.300 LINK nvme_dp 00:03:19.300 CC examples/nvme/reconnect/reconnect.o 00:03:19.300 CC examples/nvme/hello_world/hello_world.o 00:03:19.300 CC examples/nvme/arbitration/arbitration.o 00:03:19.300 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:19.559 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:19.559 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:19.559 CC examples/nvme/abort/abort.o 00:03:19.559 LINK nvme_compliance 00:03:19.559 CC examples/accel/perf/accel_perf.o 00:03:19.559 LINK dif 00:03:19.559 CC examples/blob/cli/blobcli.o 00:03:19.559 CC examples/blob/hello_world/hello_blob.o 00:03:19.559 LINK cmb_copy 00:03:19.880 LINK pmr_persistence 00:03:19.880 LINK hotplug 00:03:19.880 LINK hello_world 00:03:19.880 LINK abort 00:03:19.880 LINK reconnect 00:03:19.880 LINK hello_blob 00:03:19.880 LINK arbitration 00:03:19.880 LINK nvme_manage 00:03:20.173 CC test/bdev/bdevio/bdevio.o 00:03:20.173 LINK accel_perf 00:03:20.173 LINK blobcli 00:03:20.173 LINK iscsi_fuzz 00:03:20.431 LINK bdevio 00:03:20.431 CC examples/bdev/hello_world/hello_bdev.o 00:03:20.431 CC examples/bdev/bdevperf/bdevperf.o 00:03:20.431 LINK cuse 00:03:20.688 LINK hello_bdev 00:03:21.255 LINK bdevperf 00:03:21.513 CC examples/nvmf/nvmf/nvmf.o 00:03:22.079 LINK nvmf 00:03:24.610 LINK esnap 00:03:25.178 00:03:25.178 real 1m27.442s 00:03:25.178 user 21m25.079s 00:03:25.178 sys 3m4.154s 00:03:25.178 10:18:28 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:25.178 10:18:28 make -- common/autotest_common.sh@10 -- $ set +x 00:03:25.178 ************************************ 00:03:25.179 END TEST make 00:03:25.179 ************************************ 00:03:25.179 10:18:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:25.179 10:18:28 -- pm/common@29 -- $ signal_monitor_resources TERM 
00:03:25.179 10:18:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:25.179 10:18:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.179 10:18:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:25.179 10:18:28 -- pm/common@44 -- $ pid=2203234 00:03:25.179 10:18:28 -- pm/common@50 -- $ kill -TERM 2203234 00:03:25.179 10:18:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.179 10:18:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:25.179 10:18:28 -- pm/common@44 -- $ pid=2203236 00:03:25.179 10:18:28 -- pm/common@50 -- $ kill -TERM 2203236 00:03:25.179 10:18:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.179 10:18:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:25.179 10:18:28 -- pm/common@44 -- $ pid=2203238 00:03:25.179 10:18:28 -- pm/common@50 -- $ kill -TERM 2203238 00:03:25.179 10:18:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.179 10:18:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:25.179 10:18:28 -- pm/common@44 -- $ pid=2203267 00:03:25.179 10:18:28 -- pm/common@50 -- $ sudo -E kill -TERM 2203267 00:03:25.179 10:18:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:03:25.179 10:18:28 -- nvmf/common.sh@7 -- # uname -s 00:03:25.179 10:18:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.179 10:18:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.179 10:18:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.179 10:18:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.179 10:18:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.179 10:18:28 -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.179 10:18:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.179 10:18:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.179 10:18:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.179 10:18:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.179 10:18:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:03:25.179 10:18:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:03:25.179 10:18:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.179 10:18:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.179 10:18:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:25.179 10:18:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:25.179 10:18:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:03:25.179 10:18:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.179 10:18:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.179 10:18:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.179 10:18:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.179 10:18:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.179 10:18:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.179 10:18:28 -- paths/export.sh@5 -- # export PATH 00:03:25.179 10:18:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.179 10:18:28 -- nvmf/common.sh@47 -- # : 0 00:03:25.179 10:18:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:25.179 10:18:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:25.179 10:18:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:25.179 10:18:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.179 10:18:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.179 10:18:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:25.179 10:18:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:25.179 10:18:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:25.179 10:18:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.179 10:18:28 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.179 10:18:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.179 10:18:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.179 10:18:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:03:25.179 10:18:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.179 10:18:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 
00:03:25.179 10:18:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.179 10:18:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.179 10:18:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.179 10:18:28 -- spdk/autotest.sh@48 -- # udevadm_pid=2266703 00:03:25.179 10:18:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.179 10:18:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:25.179 10:18:28 -- pm/common@17 -- # local monitor 00:03:25.179 10:18:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.179 10:18:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.179 10:18:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.179 10:18:28 -- pm/common@21 -- # date +%s 00:03:25.179 10:18:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.179 10:18:28 -- pm/common@21 -- # date +%s 00:03:25.179 10:18:28 -- pm/common@25 -- # sleep 1 00:03:25.179 10:18:28 -- pm/common@21 -- # date +%s 00:03:25.179 10:18:28 -- pm/common@21 -- # date +%s 00:03:25.179 10:18:28 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721895508 00:03:25.179 10:18:28 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721895508 00:03:25.179 10:18:28 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721895508 00:03:25.179 10:18:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p 
monitor.autotest.sh.1721895508 00:03:25.179 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721895508_collect-vmstat.pm.log 00:03:25.179 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721895508_collect-cpu-load.pm.log 00:03:25.179 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721895508_collect-cpu-temp.pm.log 00:03:25.179 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721895508_collect-bmc-pm.bmc.pm.log 00:03:26.117 10:18:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:26.117 10:18:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:26.117 10:18:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:26.117 10:18:29 -- common/autotest_common.sh@10 -- # set +x 00:03:26.117 10:18:29 -- spdk/autotest.sh@59 -- # create_test_list 00:03:26.117 10:18:29 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:26.117 10:18:29 -- common/autotest_common.sh@10 -- # set +x 00:03:26.117 10:18:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:03:26.117 10:18:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:03:26.117 10:18:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:03:26.117 10:18:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:03:26.117 10:18:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:03:26.117 10:18:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:26.117 10:18:29 -- common/autotest_common.sh@1455 -- # uname 00:03:26.117 10:18:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:26.117 10:18:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 
00:03:26.117 10:18:29 -- common/autotest_common.sh@1475 -- # uname 00:03:26.117 10:18:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:26.117 10:18:29 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:26.117 10:18:29 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:26.117 10:18:29 -- spdk/autotest.sh@72 -- # hash lcov 00:03:26.117 10:18:29 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:26.117 10:18:29 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:26.117 --rc lcov_branch_coverage=1 00:03:26.117 --rc lcov_function_coverage=1 00:03:26.117 --rc genhtml_branch_coverage=1 00:03:26.117 --rc genhtml_function_coverage=1 00:03:26.117 --rc genhtml_legend=1 00:03:26.117 --rc geninfo_all_blocks=1 00:03:26.117 ' 00:03:26.117 10:18:29 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:26.117 --rc lcov_branch_coverage=1 00:03:26.117 --rc lcov_function_coverage=1 00:03:26.117 --rc genhtml_branch_coverage=1 00:03:26.117 --rc genhtml_function_coverage=1 00:03:26.117 --rc genhtml_legend=1 00:03:26.117 --rc geninfo_all_blocks=1 00:03:26.117 ' 00:03:26.117 10:18:29 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:26.117 --rc lcov_branch_coverage=1 00:03:26.117 --rc lcov_function_coverage=1 00:03:26.117 --rc genhtml_branch_coverage=1 00:03:26.117 --rc genhtml_function_coverage=1 00:03:26.117 --rc genhtml_legend=1 00:03:26.117 --rc geninfo_all_blocks=1 00:03:26.117 --no-external' 00:03:26.117 10:18:29 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:26.117 --rc lcov_branch_coverage=1 00:03:26.117 --rc lcov_function_coverage=1 00:03:26.117 --rc genhtml_branch_coverage=1 00:03:26.117 --rc genhtml_function_coverage=1 00:03:26.117 --rc genhtml_legend=1 00:03:26.117 --rc geninfo_all_blocks=1 00:03:26.117 --no-external' 00:03:26.117 10:18:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 
--no-external -v 00:03:26.375 lcov: LCOV version 1.14 00:03:26.375 10:18:29 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:28.274 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:28.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:28.274 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 
00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions 
found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:28.275 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:28.275 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:28.275 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:28.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:46.344 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:46.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:01.215 10:19:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:01.215 10:19:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:01.215 10:19:03 -- common/autotest_common.sh@10 -- # set +x 00:04:01.215 10:19:03 -- spdk/autotest.sh@91 -- # rm -f 00:04:01.215 10:19:03 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.215 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:01.215 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:01.215 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:01.215 0000:00:04.4 (8086 
0e24): Already using the ioatdma driver 00:04:01.215 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:01.215 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:01.215 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:01.215 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:01.215 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:04:01.215 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:01.215 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:01.215 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:01.215 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:01.215 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:01.215 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:01.215 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:01.215 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:01.474 10:19:04 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:01.474 10:19:04 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:01.474 10:19:04 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:01.474 10:19:04 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:01.474 10:19:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:01.474 10:19:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:01.474 10:19:04 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:01.474 10:19:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:01.474 10:19:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:01.474 10:19:04 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:01.474 10:19:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.474 10:19:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:01.474 10:19:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 
00:04:01.474 10:19:04 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:01.474 10:19:04 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:01.474 No valid GPT data, bailing 00:04:01.474 10:19:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:01.474 10:19:05 -- scripts/common.sh@391 -- # pt= 00:04:01.474 10:19:05 -- scripts/common.sh@392 -- # return 1 00:04:01.474 10:19:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:01.474 1+0 records in 00:04:01.474 1+0 records out 00:04:01.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00250285 s, 419 MB/s 00:04:01.474 10:19:05 -- spdk/autotest.sh@118 -- # sync 00:04:01.474 10:19:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:01.474 10:19:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:01.474 10:19:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:03.374 10:19:06 -- spdk/autotest.sh@124 -- # uname -s 00:04:03.375 10:19:06 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:03.375 10:19:06 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:04:03.375 10:19:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.375 10:19:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.375 10:19:06 -- common/autotest_common.sh@10 -- # set +x 00:04:03.375 ************************************ 00:04:03.375 START TEST setup.sh 00:04:03.375 ************************************ 00:04:03.375 10:19:06 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:04:03.375 * Looking for test storage... 
00:04:03.375 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:03.375 10:19:06 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:03.375 10:19:06 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:03.375 10:19:06 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:04:03.375 10:19:06 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.375 10:19:06 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.375 10:19:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:03.375 ************************************ 00:04:03.375 START TEST acl 00:04:03.375 ************************************ 00:04:03.375 10:19:06 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:04:03.375 * Looking for test storage... 00:04:03.375 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:03.375 10:19:06 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:03.375 10:19:06 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:03.375 10:19:06 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:03.375 10:19:06 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:03.375 10:19:06 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:03.375 10:19:06 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:03.375 10:19:06 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:03.375 10:19:06 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:03.375 10:19:06 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:03.375 10:19:06 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:03.375 10:19:06 setup.sh.acl -- 
setup/acl.sh@12 -- # declare -a devs 00:04:03.375 10:19:06 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:03.375 10:19:06 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:03.375 10:19:06 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:03.375 10:19:06 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.375 10:19:06 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:04.779 10:19:08 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:04.779 10:19:08 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:04.779 10:19:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.779 10:19:08 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:04.779 10:19:08 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.779 10:19:08 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:04:05.769 Hugepages 00:04:05.769 node hugesize free / total 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 00:04:05.769 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- 
setup/acl.sh@19 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.769 
10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:05.769 10:19:09 
setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.769 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:05.770 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.770 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.770 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.770 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:05.770 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.770 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.770 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.770 10:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:05.770 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:05.770 10:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:05.770 10:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.770 10:19:09 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:05.770 10:19:09 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 
00:04:05.770 10:19:09 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.770 10:19:09 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.770 10:19:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:05.770 ************************************ 00:04:05.770 START TEST denied 00:04:05.770 ************************************ 00:04:05.770 10:19:09 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:05.770 10:19:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:04:05.770 10:19:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:05.770 10:19:09 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:04:05.770 10:19:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.770 10:19:09 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:07.671 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:04:07.671 10:19:10 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:04:07.671 10:19:10 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:07.671 10:19:10 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:07.671 10:19:10 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:04:07.671 10:19:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:04:07.671 10:19:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:07.671 10:19:10 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:07.671 10:19:10 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:07.671 10:19:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.671 10:19:10 setup.sh.acl.denied -- setup/common.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.570 00:04:09.570 real 0m3.761s 00:04:09.570 user 0m1.099s 00:04:09.570 sys 0m1.763s 00:04:09.570 10:19:13 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.570 10:19:13 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:09.570 ************************************ 00:04:09.570 END TEST denied 00:04:09.570 ************************************ 00:04:09.570 10:19:13 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:09.570 10:19:13 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.570 10:19:13 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.570 10:19:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:09.570 ************************************ 00:04:09.570 START TEST allowed 00:04:09.570 ************************************ 00:04:09.570 10:19:13 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:09.570 10:19:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:04:09.570 10:19:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:09.570 10:19:13 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:04:09.570 10:19:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.570 10:19:13 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:12.100 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:12.100 10:19:15 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:12.100 10:19:15 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:12.100 10:19:15 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:12.100 10:19:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.100 10:19:15 setup.sh.acl.allowed -- 
setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.475 00:04:13.475 real 0m3.826s 00:04:13.475 user 0m1.005s 00:04:13.475 sys 0m1.668s 00:04:13.475 10:19:17 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.475 10:19:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:13.475 ************************************ 00:04:13.475 END TEST allowed 00:04:13.475 ************************************ 00:04:13.475 00:04:13.475 real 0m10.301s 00:04:13.475 user 0m3.143s 00:04:13.475 sys 0m5.183s 00:04:13.475 10:19:17 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.475 10:19:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:13.475 ************************************ 00:04:13.475 END TEST acl 00:04:13.475 ************************************ 00:04:13.475 10:19:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:04:13.475 10:19:17 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.475 10:19:17 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.475 10:19:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.475 ************************************ 00:04:13.475 START TEST hugepages 00:04:13.475 ************************************ 00:04:13.475 10:19:17 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:04:13.475 * Looking for test storage... 
00:04:13.475 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 38795596 kB' 'MemAvailable: 42717608 kB' 'Buffers: 2704 kB' 'Cached: 14861524 kB' 'SwapCached: 0 kB' 'Active: 11727772 kB' 'Inactive: 3693420 kB' 'Active(anon): 11288000 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560260 kB' 'Mapped: 203996 kB' 'Shmem: 10731036 kB' 'KReclaimable: 433256 kB' 'Slab: 826232 kB' 'SReclaimable: 433256 kB' 'SUnreclaim: 392976 kB' 'KernelStack: 12864 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 12428596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.475 10:19:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.475 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 
10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 
10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.476 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.476-00:04:13.735 10:19:17 setup.sh.hugepages -- setup/common.sh@31-32 -- # [ ... identical IFS=': ' / read -r var val _ / continue iterations for every remaining /proc/meminfo field from WritebackTmp through HugePages_Surp, none matching Hugepagesize ... ] 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- #
global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
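The long field-by-field scans in the trace above are setup/common.sh's get_meminfo helper: it splits each /proc/meminfo line on ': ' into a key and a value, and keeps reading until the requested key (here Hugepagesize) turns up. A minimal standalone sketch of that read-loop pattern, run against a canned sample file rather than the live /proc/meminfo so it behaves the same anywhere (the `get_field` name and the sample values are illustrative, not from SPDK):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo read loop seen in the trace: split each
# line on ': ' into key/value, skip fields until the requested key.
get_field() {
  local get=$1 file=$2 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"          # like the "echo 2048" step in the trace
      return 0
    fi
  done < "$file"
  return 1                 # key not present
}

# Canned stand-in for /proc/meminfo (values are illustrative only).
sample=$(mktemp)
printf '%s\n' 'MemTotal: 60541708 kB' 'HugePages_Total: 1024' \
  'Hugepagesize: 2048 kB' > "$sample"

get_field Hugepagesize "$sample"   # prints 2048
rm -f "$sample"
```

The value lands without its "kB" unit because the unit is consumed by the trailing `_` field, which is why the trace can assign `default_hugepages=2048` directly.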
00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:13.736 10:19:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:13.736 10:19:17 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.736 10:19:17 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.736 10:19:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.736 ************************************ 00:04:13.736 START TEST default_setup 00:04:13.736 ************************************ 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.736 10:19:17 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.736 10:19:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:14.672 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:14.672 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:14.672 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:14.672 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:14.672 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:14.672 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:14.672 0000:00:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:04:14.930 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:14.930 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:14.930 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:14.930 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:14.930 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:14.930 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:14.930 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:14.930 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:14.930 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:15.866 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.866 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40919024 kB' 'MemAvailable: 44840988 kB' 'Buffers: 2704 kB' 'Cached: 14861620 kB' 'SwapCached: 0 kB' 'Active: 11745048 kB' 'Inactive: 3693420 kB' 'Active(anon): 11305276 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577372 kB' 'Mapped: 204044 kB' 'Shmem: 10731132 kB' 'KReclaimable: 433208 kB' 'Slab: 825688 kB' 'SReclaimable: 433208 kB' 'SUnreclaim: 392480 kB' 'KernelStack: 12736 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12445796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 
19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:15.866-00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [ ... identical IFS=': ' / read -r var val _ / continue iterations for every /proc/meminfo field from MemTotal through Percpu, none matching AnonHugePages ... ] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
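Earlier in this trace, get_test_nr_hugepages turned the requested size of 2097152 into nr_hugepages=1024, which (judging from the logged values) is the requested size divided by the 2048 kB default hugepage size detected above. A small sketch of that arithmetic; the variable names follow the trace, and the kB unit is an assumption inferred from the logged numbers:

```shell
#!/usr/bin/env bash
# Sketch of the get_test_nr_hugepages arithmetic from the trace:
# requested size (kB) divided by the default hugepage size (kB).
default_hugepages=2048   # kB, from the Hugepagesize scan earlier in the log
size=2097152             # kB (2 GiB), as passed to get_test_nr_hugepages
nr_hugepages=$(( size / default_hugepages ))
echo "$nr_hugepages"     # prints 1024, matching nr_hugepages=1024 in the trace
```

This count is what later gets written to /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages (the default_huge_nr path set earlier in the log).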
00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.132 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40919044 kB' 'MemAvailable: 44841008 kB' 'Buffers: 2704 kB' 'Cached: 14861624 kB' 'SwapCached: 0 kB' 'Active: 11744728 kB' 'Inactive: 3693420 kB' 'Active(anon): 11304956 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577044 kB' 'Mapped: 204012 kB' 'Shmem: 10731136 kB' 'KReclaimable: 433208 kB' 'Slab: 825664 kB' 'SReclaimable: 433208 kB' 'SUnreclaim: 392456 kB' 'KernelStack: 12752 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12445812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.132 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 
10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.134 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 
00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40919572 kB' 'MemAvailable: 44841536 kB' 'Buffers: 2704 kB' 'Cached: 14861640 kB' 'SwapCached: 0 kB' 'Active: 11744900 kB' 'Inactive: 3693420 kB' 'Active(anon): 11305128 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577204 kB' 'Mapped: 204012 kB' 'Shmem: 10731152 kB' 'KReclaimable: 433208 kB' 'Slab: 825760 kB' 'SReclaimable: 433208 kB' 'SUnreclaim: 392552 kB' 'KernelStack: 12752 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12445836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.134 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 
10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 
10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:16.136 nr_hugepages=1024 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.136 resv_hugepages=0 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.136 surplus_hugepages=0 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.136 anon_hugepages=0 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@18 -- # local node= 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40919524 kB' 'MemAvailable: 44841488 kB' 'Buffers: 2704 kB' 'Cached: 14861672 kB' 'SwapCached: 0 kB' 'Active: 11745016 kB' 'Inactive: 3693420 kB' 'Active(anon): 11305244 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577316 kB' 'Mapped: 204012 kB' 'Shmem: 10731184 kB' 'KReclaimable: 433208 kB' 'Slab: 825760 kB' 'SReclaimable: 433208 kB' 'SUnreclaim: 392552 kB' 'KernelStack: 12784 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12445856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 
10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.138 10:19:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [per-field scan: Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted are each read and skipped (IFS=': '; read -r var val _; continue) until HugePages_Total matches]
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18256788 kB' 'MemUsed: 14573096 kB' 'SwapCached: 0 kB' 'Active: 8198836 kB' 'Inactive: 3337460 kB' 'Active(anon): 7843080 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11271984 kB' 'Mapped: 163140 kB' 'AnonPages: 267452 kB' 'Shmem: 7578768 kB' 'KernelStack: 7464 kB' 'PageTables: 5340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154092 kB' 'Slab: 330700 kB' 'SReclaimable: 154092 kB' 'SUnreclaim: 176608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:16.138 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [per-field scan: each node0 field printed above, from MemTotal through HugePages_Free, is read and skipped (IFS=': '; read -r var val _; continue) until HugePages_Surp matches]
00:04:16.140 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.140 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:16.140 10:19:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:16.140 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:16.140 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:16.140 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:16.140 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:16.140 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
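The get_meminfo calls traced above all follow the same pattern: split each meminfo-style line on ': ', skip every field until the requested key matches, then print its value. A minimal standalone sketch of that pattern, simplified from the real setup/common.sh helper (which first snapshots the file with mapfile and strips "Node N " prefixes); the function name `get_meminfo_value` and the file argument are illustrative, not the script's actual interface:

```shell
# Sketch of the get_meminfo scan pattern seen in the trace above: skip every
# field (the repeated "continue" records) until the requested key matches,
# then print its value. Simplified relative to setup/common.sh.
get_meminfo_value() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each mismatch is one "continue" in the log
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}
```

Called as `get_meminfo_value HugePages_Surp /sys/devices/system/node/node0/meminfo`, it would print the node-local surplus count, matching the `echo 0` record in the trace.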
00:04:16.140 node0=1024 expecting 1024
00:04:16.140 10:19:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:16.140 
00:04:16.140 real 0m2.457s
00:04:16.140 user 0m0.604s
00:04:16.140 sys 0m0.915s
00:04:16.140 10:19:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:16.140 10:19:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:16.140 ************************************
00:04:16.140 END TEST default_setup
00:04:16.140 ************************************
00:04:16.140 10:19:19 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:16.140 10:19:19 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:16.140 10:19:19 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:16.140 10:19:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:16.140 ************************************
00:04:16.140 START TEST per_node_1G_alloc
00:04:16.140 ************************************
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:16.140 10:19:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:04:17.519 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:17.519 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:17.519 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:17.519 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:17.519 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:17.519 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:17.519 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:17.519 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:17.519 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:17.519 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:17.519 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:17.519 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:17.519 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:17.519 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:17.519 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:17.519 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:17.519 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.519 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40910484 kB' 'MemAvailable: 44832448 kB' 'Buffers: 2704 kB' 'Cached: 14861728 kB' 'SwapCached: 0 kB' 'Active: 11744820 kB' 'Inactive:
3693420 kB' 'Active(anon): 11305048 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576944 kB' 'Mapped: 204040 kB' 'Shmem: 10731240 kB' 'KReclaimable: 433208 kB' 'Slab: 825736 kB' 'SReclaimable: 433208 kB' 'SUnreclaim: 392528 kB' 'KernelStack: 12768 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12445904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197144 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB'
00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [per-field scan: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback and AnonPages are each read and skipped (IFS=': '; read -r var val _; continue) while searching for AnonHugePages]
00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.520 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40914212 kB' 'MemAvailable: 44836144 kB' 'Buffers: 2704 kB' 'Cached: 14861732 kB' 'SwapCached: 0 kB' 'Active: 11744528 kB' 'Inactive: 3693420 kB' 'Active(anon): 11304756 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576748 kB' 'Mapped: 204108 kB' 'Shmem: 10731244 kB' 'KReclaimable: 433176 kB' 'Slab: 825740 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392564 kB' 'KernelStack: 12768 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12445924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197096 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue
00:04:17.521 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [… MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd each read (IFS=': ' read -r var val _), tested against HugePages_Surp at setup/common.sh@32, and skipped via continue …]
00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40913084 kB' 'MemAvailable: 44835016 kB' 'Buffers: 2704 kB' 'Cached: 14861748 kB' 'SwapCached: 0 kB' 'Active: 11744256 kB' 'Inactive: 3693420 kB' 'Active(anon): 11304484 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576392 kB' 'Mapped: 204032 kB' 'Shmem: 10731260 kB' 'KReclaimable: 433176 kB' 'Slab: 825752 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392576 kB' 'KernelStack: 12816 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12445944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 
0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.523 
10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.523 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.524 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.525 nr_hugepages=1024 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.525 resv_hugepages=0 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.525 surplus_hugepages=0 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.525 anon_hugepages=0 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40913336 kB' 'MemAvailable: 44835268 kB' 'Buffers: 2704 kB' 'Cached: 14861772 kB' 'SwapCached: 0 kB' 'Active: 11744252 kB' 'Inactive: 3693420 kB' 'Active(anon): 11304480 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576388 kB' 'Mapped: 204032 kB' 'Shmem: 10731284 kB' 'KReclaimable: 433176 kB' 'Slab: 825752 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392576 kB' 'KernelStack: 12816 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12445968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.525 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.525 [identical common.sh@32 check/continue and @31 IFS/read trace records repeated for each remaining meminfo field, MemFree through Unaccepted, none matching HugePages_Total] 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.527 
10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19298576 kB' 'MemUsed: 13531308 kB' 'SwapCached: 0 kB' 'Active: 8198716 kB' 'Inactive: 3337460 kB' 'Active(anon): 7842960 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11272084 kB' 'Mapped: 163160 kB' 'AnonPages: 267228 kB' 'Shmem: 7578868 kB' 'KernelStack: 7496 kB' 'PageTables: 5352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154060 kB' 'Slab: 330668 kB' 'SReclaimable: 154060 kB' 'SUnreclaim: 176608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.527 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.527 [identical common.sh@32 check/continue and @31 IFS/read trace records repeated for each remaining node0 meminfo field, MemFree through FilePmdMapped and Unaccepted through HugePages_Total, none matching HugePages_Surp] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.529 10:19:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 21615036 kB' 'MemUsed: 6096788 kB' 'SwapCached: 0 kB' 'Active: 3545564 kB' 'Inactive: 355960 kB' 'Active(anon): 3461548 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355960 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3592412 kB' 'Mapped: 40872 kB' 'AnonPages: 309160 kB' 'Shmem: 3152436 kB' 'KernelStack: 5320 kB' 'PageTables: 2936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279116 kB' 'Slab: 495084 kB' 'SReclaimable: 279116 kB' 'SUnreclaim: 215968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.529 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 
10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:17.530 node0=512 expecting 512 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 
expecting 512' 00:04:17.530 node1=512 expecting 512 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:17.530 00:04:17.530 real 0m1.427s 00:04:17.530 user 0m0.614s 00:04:17.530 sys 0m0.773s 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.530 10:19:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:17.530 ************************************ 00:04:17.530 END TEST per_node_1G_alloc 00:04:17.530 ************************************ 00:04:17.530 10:19:21 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:17.530 10:19:21 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.530 10:19:21 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.530 10:19:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:17.530 ************************************ 00:04:17.530 START TEST even_2G_alloc 00:04:17.530 ************************************ 00:04:17.530 10:19:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:17.530 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:17.530 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:17.530 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.530 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.530 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:17.530 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.530 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 
00:04:17.530 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # 
[[ output == output ]] 00:04:17.531 10:19:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:18.910 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:18.910 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:18.910 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:18.910 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:18.910 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:18.910 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:18.910 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:18.910 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:18.911 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:18.911 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:18.911 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:18.911 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:18.911 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:18.911 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:18.911 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:18.911 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:18.911 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local 
resv 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40918612 kB' 'MemAvailable: 44840544 kB' 'Buffers: 2704 kB' 'Cached: 14861872 kB' 'SwapCached: 0 kB' 'Active: 11744912 kB' 'Inactive: 3693420 kB' 'Active(anon): 11305140 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 577400 kB' 'Mapped: 204512 kB' 'Shmem: 10731384 kB' 'KReclaimable: 433176 kB' 'Slab: 825740 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392564 kB' 'KernelStack: 12832 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12447532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.911 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.912 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40919524 kB' 'MemAvailable: 44841456 kB' 'Buffers: 2704 kB' 'Cached: 14861872 kB' 'SwapCached: 0 kB' 'Active: 11748104 kB' 'Inactive: 3693420 kB' 'Active(anon): 11308332 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580204 kB' 'Mapped: 204488 kB' 'Shmem: 10731384 kB' 'KReclaimable: 433176 kB' 'Slab: 825732 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392556 kB' 'KernelStack: 12864 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12450192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.912 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 
10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.913 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.914 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical per-field scan elided: SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd each fail the match and hit setup/common.sh@32 continue ...]
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40915292 kB' 'MemAvailable: 44837224 kB' 'Buffers: 2704 kB' 'Cached: 14861892 kB' 'SwapCached: 0 kB' 'Active: 11749420 kB' 'Inactive: 3693420 kB' 'Active(anon): 11309648 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581440 kB' 'Mapped: 204488 kB' 'Shmem: 10731404 kB' 'KReclaimable: 433176 kB' 'Slab: 825752 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392576 kB' 'KernelStack: 12720 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12452336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197020 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB'
00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.915 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical per-field scan elided: every field from MemFree through HugePages_Free fails the match and hits setup/common.sh@32 continue ...]
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0 00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc --
setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.918 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40914288 kB' 'MemAvailable: 44836220 kB' 'Buffers: 2704 kB' 'Cached: 14861912 kB' 'SwapCached: 0 kB' 'Active: 11744400 kB' 'Inactive: 3693420 kB' 'Active(anon): 11304628 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576416 kB' 'Mapped: 204452 kB' 'Shmem: 10731424 kB' 'KReclaimable: 433176 kB' 'Slab: 825752 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392576 kB' 'KernelStack: 12768 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12445868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB'
00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical per-field scan elided: each non-matching field hits setup/common.sh@32 continue ...] 00:04:18.919 10:19:22
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.919 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.919 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.920 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.921 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.921 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.921 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19301012 kB' 'MemUsed: 13528872 kB' 'SwapCached: 0 kB' 'Active: 8202140 kB' 'Inactive: 3337460 kB' 'Active(anon): 7846384 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11272216 kB' 'Mapped: 163180 kB' 'AnonPages: 270540 kB' 'Shmem: 7579000 kB' 'KernelStack: 7480 kB' 'PageTables: 5256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154060 kB' 'Slab: 330620 kB' 'SReclaimable: 154060 kB' 'SUnreclaim: 176560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.921 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 
10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.922 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.923 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 21613676 kB' 'MemUsed: 6098148 kB' 'SwapCached: 0 kB' 'Active: 3545720 kB' 'Inactive: 355960 kB' 'Active(anon): 3461704 kB' 
'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355960 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3592424 kB' 'Mapped: 41572 kB' 'AnonPages: 309344 kB' 'Shmem: 3152448 kB' 'KernelStack: 5304 kB' 'PageTables: 2940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279116 kB' 'Slab: 495132 kB' 'SReclaimable: 279116 kB' 'SUnreclaim: 216016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:18.924 node0=512 expecting 512 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:18.924 node1=512 expecting 512 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:18.924 00:04:18.924 real 0m1.322s 00:04:18.924 user 0m0.537s 00:04:18.924 sys 0m0.745s 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.924 10:19:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:18.924 ************************************ 00:04:18.924 END TEST even_2G_alloc 00:04:18.924 ************************************ 00:04:18.924 10:19:22 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:18.924 10:19:22 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.924 10:19:22 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.924 10:19:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:18.924 
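The xtrace above repeatedly exercises a `get_meminfo` helper in `setup/common.sh` that walks `/proc/meminfo` (or a node's sysfs `meminfo`) looking for one field, stripping the `Node N ` prefix that sysfs adds and splitting each line on `': '`. The following is a minimal, self-contained re-creation of that parsing pattern, reconstructed from the trace alone — the function name, structure, and sample data here are assumptions for illustration, not the exact SPDK source.

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix pattern

# get_meminfo FIELD < meminfo-text
# Echoes the value of FIELD, mirroring the prefix-strip plus
# IFS=': ' read loop visible in the trace above. Hypothetical sketch.
get_meminfo() {
    local get=$1 line var val _
    while IFS= read -r line; do
        # sysfs per-node files prefix every line with "Node N ";
        # /proc/meminfo lines pass through unchanged
        line=${line#Node +([0-9]) }
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Sample input in the sysfs per-node format seen in the trace
sample='Node 1 HugePages_Total: 512
Node 1 HugePages_Free: 512
Node 1 HugePages_Surp: 0'

get_meminfo HugePages_Surp <<<"$sample"   # prints 0
```

In the real harness this value feeds the `(( nodes_test[node] += ... ))` accounting shown in the trace, which is how the test confirms each NUMA node received its expected share of hugepages (the `node0=512 expecting 512` / `node1=512 expecting 512` lines above).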
************************************ 00:04:18.924 START TEST odd_alloc 00:04:18.924 ************************************ 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@84 -- # : 1 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.924 10:19:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:20.311 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:20.311 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:20.311 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:20.311 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:20.311 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:20.311 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:20.311 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:20.311 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:20.311 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:20.311 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:20.311 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:20.311 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:20.311 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 
00:04:20.311 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:20.311 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:20.311 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:20.311 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.311 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.312 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.312 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.312 10:19:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.312 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.312 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.312 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40930768 kB' 'MemAvailable: 44852700 kB' 'Buffers: 2704 kB' 'Cached: 14862000 kB' 'SwapCached: 0 kB' 'Active: 11741216 kB' 'Inactive: 3693420 kB' 'Active(anon): 11301444 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572776 kB' 'Mapped: 203060 kB' 'Shmem: 10731512 kB' 'KReclaimable: 433176 kB' 'Slab: 825428 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392252 kB' 'KernelStack: 12768 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12431216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:20.312 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.312 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.312 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.312 10:19:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ [repetitive per-field scan of /proc/meminfo for AnonHugePages elided: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted — each non-matching iteration: setup/common.sh@32 continue; @31 IFS=': '; read -r var val _] 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.313
10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40931056 kB' 'MemAvailable: 44852988 kB' 'Buffers: 2704 kB' 'Cached: 14862000 kB' 'SwapCached: 0 kB' 'Active: 11740672 kB' 'Inactive: 3693420 kB' 'Active(anon): 11300900 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572708 kB' 'Mapped: 203020 kB' 'Shmem: 10731512 kB' 'KReclaimable: 433176 kB' 'Slab: 825428 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392252 kB' 'KernelStack: 12800 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12431232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 
'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.313 10:19:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ [repetitive per-field scan of /proc/meminfo for HugePages_Surp elided: Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total — each non-matching iteration: setup/common.sh@32 continue; @31 IFS=': '; read -r var val _] 00:04:20.315 10:19:23
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.315 10:19:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40931112 kB' 'MemAvailable: 44853044 kB' 'Buffers: 2704 kB' 'Cached: 14862020 kB' 'SwapCached: 0 kB' 'Active: 11740528 kB' 'Inactive: 3693420 kB' 'Active(anon): 11300756 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572484 kB' 'Mapped: 203020 kB' 'Shmem: 10731532 kB' 'KReclaimable: 433176 kB' 'Slab: 825460 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392284 kB' 'KernelStack: 12784 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12431252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.315 10:19:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31-32 -- # [repeated trace elided: keys MemFree through HugePages_Free each compared against HugePages_Rsvd and skipped via 'continue'] 00:04:20.317 10:19:23
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.317 10:19:23
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.317 10:19:23
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.317 10:19:23
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:20.317 nr_hugepages=1025 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:20.317 resv_hugepages=0 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:20.317 surplus_hugepages=0 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:20.317 anon_hugepages=0 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40931112 kB' 'MemAvailable: 44853044 kB' 'Buffers: 2704 kB' 'Cached: 14862020 kB' 'SwapCached: 0 kB' 'Active: 11740236 kB' 'Inactive: 3693420 kB' 'Active(anon): 11300464 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572196 kB' 'Mapped: 203020 kB' 'Shmem: 10731532 kB' 'KReclaimable: 433176 kB' 'Slab: 825460 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392284 kB' 'KernelStack: 12784 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12431276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.317 10:19:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:20.317 10:19:23
setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [repeated trace elided: keys MemAvailable through AnonPages each compared against HugePages_Total and skipped via 'continue'] 00:04:20.318 10:19:23
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 
10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.318 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:20.319 10:19:23 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19324232 kB' 'MemUsed: 13505652 kB' 'SwapCached: 0 kB' 'Active: 8195948 kB' 'Inactive: 3337460 kB' 'Active(anon): 7840192 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11272272 kB' 'Mapped: 162608 kB' 'AnonPages: 264232 kB' 'Shmem: 7579056 kB' 'KernelStack: 7432 kB' 'PageTables: 5040 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154060 kB' 'Slab: 330384 kB' 'SReclaimable: 154060 kB' 'SUnreclaim: 176324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.319 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 21607136 kB' 'MemUsed: 6104688 kB' 'SwapCached: 0 kB' 'Active: 3544620 kB' 'Inactive: 355960 kB' 'Active(anon): 3460604 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355960 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3592492 kB' 'Mapped: 40412 kB' 'AnonPages: 308252 kB' 'Shmem: 3152516 kB' 'KernelStack: 5352 kB' 'PageTables: 2968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279116 kB' 'Slab: 495076 kB' 'SReclaimable: 279116 kB' 'SUnreclaim: 215960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.320 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.321 10:19:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.321 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.322 10:19:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.322 
10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:20.322 node0=512 expecting 513 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:20.322 node1=513 expecting 512 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:20.322 00:04:20.322 real 0m1.363s 00:04:20.322 user 0m0.570s 00:04:20.322 sys 0m0.753s 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.322 10:19:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:20.322 ************************************ 00:04:20.322 END TEST odd_alloc 00:04:20.322 ************************************ 00:04:20.322 10:19:23 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:20.322 10:19:23 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.322 
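[Editor's note: the long runs of `IFS=': '` / `read -r var val _` / `continue` lines above are the script scanning a meminfo file key by key until it reaches `HugePages_Surp`. A minimal sketch of that scan pattern (not the actual `setup/common.sh` implementation; the function name and file argument are illustrative):]

```shell
#!/usr/bin/env bash
# Sketch of the scan the trace shows: read each "Key: value kB" line
# with IFS=': ', skip non-matching keys via "continue" (the repeated
# lines in the log), and echo the value for the requested key.
get_meminfo_sketch() {          # usage: get_meminfo_sketch KEY FILE
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # one "continue" per non-matching key
        echo "$val"
        return 0
    done < "$mem_f"
    echo 0                                  # key absent -> "echo 0", as in the trace
}

# Demo against a miniature meminfo file:
tmp=$(mktemp)
printf '%s\n' 'MemTotal: 27711824 kB' 'HugePages_Total: 513' \
              'HugePages_Surp: 0' > "$tmp"
get_meminfo_sketch HugePages_Surp "$tmp"   # prints 0
rm -f "$tmp"
```

The real script additionally picks `/sys/devices/system/node/nodeN/meminfo` over `/proc/meminfo` when a node is given and strips the `Node N ` prefix, which is why the trace shows `mem_f=/sys/devices/system/node/node1/meminfo` before the scan.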
10:19:23 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.322 10:19:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:20.322 ************************************ 00:04:20.322 START TEST custom_alloc 00:04:20.322 ************************************ 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.322 10:19:23 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 
00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:20.322 10:19:23 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.322 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.323 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.323 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.323 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:20.323 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.323 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:20.323 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.323 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:20.323 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:20.323 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:20.323 10:19:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:20.323 10:19:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.323 10:19:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:21.703 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:21.703 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:21.703 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:21.703 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:21.703 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:21.703 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:21.703 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:21.703 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:21.703 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:21.703 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:21.703 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:21.703 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:21.703 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:21.703 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:21.703 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:21.703 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:21.703 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 
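[Editor's note: the `custom_alloc` setup above builds the `HUGENODE` string by appending one `nodes_hp[N]=COUNT` entry per node and summing the counts, which is how the trace arrives at `HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'` and `nr_hugepages=1536`. A self-contained sketch of that assembly, using the values from the trace:]

```shell
#!/usr/bin/env bash
# Sketch of the HUGENODE assembly traced above: each node's hugepage
# count becomes a "nodes_hp[N]=COUNT" entry, the total is accumulated,
# and the entries are comma-joined (the script sets IFS=',' for this).
nodes_hp=([0]=512 [1]=1024)   # per-node counts taken from the trace
HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
# Join with commas; "${arr[*]}" uses the first character of IFS.
joined=$(IFS=,; echo "${HUGENODE[*]}")
echo "HUGENODE=$joined"            # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$_nr_hugepages" # nr_hugepages=1536
```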
00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39870584 kB' 'MemAvailable: 43792516 kB' 'Buffers: 2704 kB' 'Cached: 14862136 kB' 'SwapCached: 0 kB' 'Active: 11742628 kB' 'Inactive: 3693420 kB' 'Active(anon): 11302856 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574392 
kB' 'Mapped: 203068 kB' 'Shmem: 10731648 kB' 'KReclaimable: 433176 kB' 'Slab: 825364 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392188 kB' 'KernelStack: 13200 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12433844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197304 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.703 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 
10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39871736 kB' 'MemAvailable: 43793668 kB' 'Buffers: 2704 kB' 'Cached: 14862136 kB' 'SwapCached: 0 kB' 'Active: 11742040 kB' 'Inactive: 3693420 kB' 'Active(anon): 11302268 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574232 kB' 'Mapped: 203056 kB' 'Shmem: 10731648 kB' 'KReclaimable: 433176 kB' 'Slab: 825364 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392188 kB' 'KernelStack: 13184 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12433860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.704 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 
10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.705 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [... each remaining /proc/meminfo key (Committed_AS through HugePages_Free) is compared against HugePages_Surp; none match, so each iteration hits continue ...]
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.706 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39872332 kB' 'MemAvailable: 43794264 kB' 'Buffers: 2704 kB' 'Cached: 14862156 kB' 'SwapCached: 0 kB' 'Active: 11741096 kB' 'Inactive: 3693420 kB' 'Active(anon): 11301324 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572804 kB' 'Mapped: 203056 kB' 'Shmem: 10731668 kB' 'KReclaimable: 433176 kB' 'Slab: 825656 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392480 kB' 'KernelStack: 12768 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12431524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB'
00:04:21.707 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [... each /proc/meminfo key from MemTotal through HugePages_Free is compared against HugePages_Rsvd; none match, so each iteration hits continue ...]
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.708 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39871576 kB' 'MemAvailable: 43793508 kB' 'Buffers: 2704 kB' 'Cached: 14862176 kB' 'SwapCached: 0 kB' 'Active: 11741144 kB' 'Inactive: 3693420 kB' 'Active(anon): 11301372 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572896 kB' 'Mapped: 203036 kB' 'Shmem: 10731688 kB' 'KReclaimable: 433176 kB' 'Slab: 825656 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392480 kB' 'KernelStack: 12816 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12431544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB'
00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [... each /proc/meminfo key (MemTotal, MemFree, MemAvailable, Buffers, Cached, ...) is compared against HugePages_Total; non-matching keys hit continue ...]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.709 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19319584 kB' 'MemUsed: 13510300 kB' 'SwapCached: 0 kB' 'Active: 8196472 kB' 'Inactive: 3337460 kB' 'Active(anon): 7840716 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11272352 kB' 'Mapped: 162624 kB' 'AnonPages: 264708 kB' 'Shmem: 7579136 kB' 'KernelStack: 7464 kB' 'PageTables: 5084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154060 kB' 'Slab: 330680 kB' 'SReclaimable: 154060 kB' 'SUnreclaim: 176620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.710 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 
10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.711 10:19:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace elided: the read loop compares each remaining meminfo key (Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) against HugePages_Surp and continues] 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.712 10:19:25 
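The xtrace above is the body of the get_meminfo helper scanning a node's meminfo file key by key. A minimal reconstruction of that helper, assuming only the structure implied by the setup/common.sh line markers in the trace (a sketch, not the canonical SPDK source):

```shell
#!/usr/bin/env bash
shopt -s extglob  # needed for the "Node N " prefix strip below

# Sketch of get_meminfo as implied by the setup/common.sh@17-@33 xtrace:
# print the value of one meminfo key, optionally scoped to a NUMA node.
get_meminfo() {
	local get=$1 node=$2
	local var val _
	local mem_f=/proc/meminfo mem

	# Per-node counters live in sysfs; the trace probes this path first
	# and falls back to the global /proc/meminfo when it is absent.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# sysfs lines look like "Node 1 HugePages_Surp: 0"; drop the prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	# Split each line on ": " and print the value of the requested key.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] && echo "$val" && return 0
	done < <(printf '%s\n' "${mem[@]}")
}

get_meminfo MemTotal          # system-wide MemTotal, in kB
get_meminfo HugePages_Surp 0  # surplus huge pages, node 0 if it exists
```

With no node argument (as in the trace's earlier global calls), the sysfs probe fails and the helper reads /proc/meminfo instead, which is exactly the `[[ -e /sys/devices/system/node/node/meminfo ]]` miss visible later in this log.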
setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 20552932 kB' 'MemUsed: 7158892 kB' 'SwapCached: 0 kB' 'Active: 3544368 kB' 'Inactive: 355960 kB' 'Active(anon): 3460352 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355960 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3592552 kB' 'Mapped: 40412 kB' 'AnonPages: 307860 kB' 'Shmem: 3152576 kB' 'KernelStack: 5336 kB' 'PageTables: 2912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279116 kB' 'Slab: 494976 kB' 'SReclaimable: 279116 kB' 'SUnreclaim: 215860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.712 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace elided: per-key comparisons of MemTotal through HugePages_Free against HugePages_Surp, each followed by continue] 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:21.713 node0=512 expecting 512 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:21.713 node1=1024 expecting 1024 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:21.713 00:04:21.713 real 0m1.343s 00:04:21.713 user 
0m0.584s 00:04:21.713 sys 0m0.718s 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.713 10:19:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:21.713 ************************************ 00:04:21.713 END TEST custom_alloc 00:04:21.713 ************************************ 00:04:21.713 10:19:25 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:21.713 10:19:25 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.713 10:19:25 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.713 10:19:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:21.713 ************************************ 00:04:21.713 START TEST no_shrink_alloc 00:04:21.714 ************************************ 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:21.714 
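The `get_test_nr_hugepages 2097152 0` call that opens no_shrink_alloc converts a size in kB into a 2048 kB hugepage count and assigns it to the listed NUMA nodes. A simplified reconstruction of that arithmetic, using the variable names visible in the hugepages.sh@49-@73 trace (illustrative, not the canonical script body):

```shell
#!/usr/bin/env bash
# Simplified sketch of get_test_nr_hugepages as traced at hugepages.sh@49-@73:
# turn a requested size (kB) into a per-node hugepage allocation plan.
default_hugepages=2048  # kB per huge page (2 MiB), as in the trace

get_test_nr_hugepages() {
	local size=$1; shift
	local node_ids=("$@")  # e.g. ("0") for the no_shrink_alloc test
	local node

	# Refuse sizes smaller than one huge page, then compute the page count.
	(( size >= default_hugepages )) || return 1
	nr_hugepages=$((size / default_hugepages))

	# Distribute the requested pages across the user-supplied nodes.
	nodes_test=()
	for node in "${node_ids[@]}"; do
		nodes_test[node]=$nr_hugepages
	done
}

get_test_nr_hugepages 2097152 0
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"  # nr_hugepages=1024 node0=1024
```

This matches the trace: 2097152 kB / 2048 kB per page yields `nr_hugepages=1024`, all pinned to node 0, which is why the log sets `nodes_test[_no_nodes]=1024`.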
10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.714 10:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:23.095 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:23.095 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:23.095 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:23.095 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:23.095 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:23.095 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:23.095 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:23.095 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 
00:04:23.095 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:23.095 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:23.095 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:23.095 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:23.095 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:23.095 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:23.095 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:23.095 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:23.095 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # 
local mem_f mem 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40914876 kB' 'MemAvailable: 44836808 kB' 'Buffers: 2704 kB' 'Cached: 14862260 kB' 'SwapCached: 0 kB' 'Active: 11740780 kB' 'Inactive: 3693420 kB' 'Active(anon): 11301008 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572968 kB' 'Mapped: 203028 kB' 'Shmem: 10731772 kB' 'KReclaimable: 433176 kB' 'Slab: 825396 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392220 kB' 'KernelStack: 12832 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12431900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:23.095 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace elided: per-key comparisons of MemTotal through KReclaimable against AnonHugePages, each followed by continue]
00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 
10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.096 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.097 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40917340 kB' 'MemAvailable: 44839272 kB' 'Buffers: 2704 kB' 'Cached: 14862264 kB' 'SwapCached: 0 kB' 'Active: 11741064 kB' 'Inactive: 3693420 kB' 'Active(anon): 11301292 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572828 kB' 'Mapped: 203048 kB' 'Shmem: 10731776 kB' 'KReclaimable: 433176 kB' 'Slab: 825376 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392200 kB' 'KernelStack: 12832 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12431916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196968 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:23.097 
10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 
10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:23.099 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40918044 kB' 'MemAvailable: 44839976 kB' 'Buffers: 2704 kB' 'Cached: 14862284 kB' 'SwapCached: 0 kB' 'Active: 11740996 kB' 'Inactive: 3693420 kB' 'Active(anon): 11301224 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572744 kB' 'Mapped: 203048 kB' 'Shmem: 10731796 kB' 'KReclaimable: 433176 kB' 'Slab: 825464 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392288 kB' 'KernelStack: 12816 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12431940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196968 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 
10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # return 0 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:23.101 nr_hugepages=1024 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.101 resv_hugepages=0 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.101 surplus_hugepages=0 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.101 anon_hugepages=0 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40918092 kB' 'MemAvailable: 44840024 kB' 'Buffers: 2704 kB' 'Cached: 14862304 kB' 'SwapCached: 0 kB' 'Active: 11740976 kB' 'Inactive: 3693420 kB' 'Active(anon): 11301204 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572744 kB' 'Mapped: 203048 kB' 'Shmem: 10731816 kB' 'KReclaimable: 433176 kB' 'Slab: 825464 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392288 kB' 'KernelStack: 12816 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12431960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196984 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 IFS/read/compare/continue trace repeated for every remaining meminfo field (Inactive through Unaccepted), none matching HugePages_Total ...]
00:04:23.103 10:19:26
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var 
val 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18275480 kB' 'MemUsed: 14554404 kB' 'SwapCached: 0 kB' 'Active: 8197320 kB' 'Inactive: 3337460 kB' 'Active(anon): 7841564 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11272476 kB' 'Mapped: 162636 kB' 'AnonPages: 265516 kB' 'Shmem: 7579260 kB' 'KernelStack: 7496 kB' 'PageTables: 5136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154060 kB' 'Slab: 330492 kB' 'SReclaimable: 154060 kB' 'SUnreclaim: 176432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.408 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.409 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:23.409 10:19:26
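The trace above is `get_meminfo` from SPDK's setup/common.sh scanning /proc/meminfo field by field until the requested key (`HugePages_Surp`) matches, then echoing its value. A minimal standalone sketch of that loop (a simplified reconstruction based on the trace, not the exact SPDK helper, which also handles per-node files):

```shell
# Simplified reconstruction of the get_meminfo loop seen in the trace:
# split each /proc/meminfo line on ': ', compare the key, and print the
# value of the first match (or 0 if the key never appears).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    echo 0
}

get_meminfo HugePages_Total
```

On the node in this log, `get_meminfo HugePages_Total` would print 1024, which is what the `node0=1024 expecting 1024` check above verifies.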
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:23.409 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:04:24.347 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:24.347 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:24.347 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:24.347 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:24.347 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:24.347 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:24.347 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:24.347 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:24.347 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:24.347 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:24.347 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:24.347 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:24.347 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:24.347 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:24.347 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:24.347 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:24.347 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:24.347 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:24.347 10:19:27
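Here setup.sh runs with `NRHUGE=512` and `CLEAR_HUGE=no` and reports that 1024 hugepages are already allocated, i.e. it will not shrink an existing pool. A hypothetical helper (`ensure_hugepages` is an illustrative name, not SPDK's) capturing just that decision:

```shell
# Hypothetical decision helper mirroring the "no shrink" behaviour in the
# log: print the page count that should end up in nr_hugepages, never
# lowering an already-larger pool. The INFO message goes to stderr.
ensure_hugepages() {
    local requested=$1 current=$2
    if [ "$current" -ge "$requested" ]; then
        echo "INFO: Requested $requested hugepages but $current already allocated" >&2
        echo "$current"   # keep the existing, larger pool
    else
        echo "$requested" # grow the pool to the requested size
    fi
}
```

With the values from this run, `ensure_hugepages 512 1024` keeps the pool at 1024, matching the INFO line above and the later `node0=1024 expecting 1024` check.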
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:24.347 10:19:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:24.347 10:19:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:24.347 10:19:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:24.347 10:19:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:24.347 10:19:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:24.347 10:19:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:24.347 10:19:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.347 10:19:28
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40893604 kB' 'MemAvailable: 44815536 kB' 'Buffers: 2704 kB' 'Cached: 14862376 kB' 'SwapCached: 0 kB' 'Active: 11741360 kB' 'Inactive: 3693420 kB' 'Active(anon): 11301588 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573056 kB' 'Mapped: 203068 kB' 'Shmem: 10731888 kB' 'KReclaimable: 433176 kB' 'Slab: 825284 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392108 kB' 'KernelStack: 12832 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12432012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197144 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB'
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:24.347 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical continue traces repeated for every remaining /proc/meminfo field (MemAvailable through HardwareCorrupted) until AnonHugePages is reached ...]
00:04:24.348 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:24.348 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.349 10:19:28
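In the trace, get_meminfo tests `[[ -e /sys/devices/system/node/node/meminfo ]]` with an empty `node=` and so falls back to the global /proc/meminfo. A sketch of that file-selection step (simplified and hypothetical in its details; the real common.sh also strips the `Node N` prefix that per-node meminfo lines carry):

```shell
# Pick the meminfo source the way the trace does: a per-node sysfs file
# when a node id is given and the file exists, otherwise /proc/meminfo.
# With node="" the per-node path becomes .../node/node/meminfo, which
# never exists -- exactly the failing -e test seen in the log.
meminfo_file() {
    local node=$1
    local mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    echo "$mem_f"
}
```

For a node-scoped query like the `node0=1024` check, the caller would pass `0` and read `/sys/devices/system/node/node0/meminfo` instead.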
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40897700 kB' 'MemAvailable: 44819632 kB' 'Buffers: 2704 kB' 'Cached: 14862376 kB' 'SwapCached: 0 kB' 'Active: 11741480 kB' 'Inactive: 3693420 kB' 'Active(anon): 11301708 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573128 kB' 'Mapped: 203052 kB' 'Shmem: 10731888 kB' 'KReclaimable: 433176 kB' 'Slab: 825264 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392088 kB' 'KernelStack: 12848 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12432032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197128 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB'
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.349 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical continue traces repeated for the remaining /proc/meminfo fields (MemAvailable through Slab) ...]
00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p
]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 
10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.350 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.351 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.351 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.351 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.351 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.351 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.351 10:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.351 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.351 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.351 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.351 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.351 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.351 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40898372 kB' 'MemAvailable: 44820304 kB' 'Buffers: 2704 kB' 'Cached: 14862376 kB' 'SwapCached: 0 kB' 'Active: 11741016 kB' 'Inactive: 3693420 kB' 'Active(anon): 11301244 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572664 kB' 'Mapped: 203052 kB' 'Shmem: 10731888 kB' 'KReclaimable: 433176 kB' 'Slab: 825320 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392144 kB' 'KernelStack: 12848 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12432052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197128 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:24.351 
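The trace above shows `get_meminfo` splitting each `/proc/meminfo` line on `': '` into a key and a value, skipping non-matching keys, and printing the value (or a fallback of 0) for the requested field. A minimal standalone sketch of that technique follows; `get_meminfo_field` is a hypothetical name for illustration, not the actual SPDK helper, and the per-node handling of the real script is omitted:

```shell
#!/usr/bin/env bash
# Sketch (assumption): extract one numeric field from a meminfo-style file.
# Each line looks like "Key:   value kB"; IFS=': ' splits it into key,
# value, and a discarded unit column, mirroring the `read -r var val _`
# loop seen in the trace.
get_meminfo_field() {
	local get=$1 mem_f=${2:-/proc/meminfo}
	local var val _
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "${val:-0}"   # print the matched value
			return 0
		fi
	done < "$mem_f"
	echo 0                     # field absent: report 0, like the trace's `echo 0`
}
```

Usage would be along the lines of `rsvd=$(get_meminfo_field HugePages_Rsvd)`; the fallback to 0 means callers can do arithmetic on the result without testing for a missing key first.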
10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.351 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.351 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [condensed trace: scanned MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree -- none match HugePages_Rsvd, continue each time] 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.614 10:19:28
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.614 nr_hugepages=1024 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.614 resv_hugepages=0 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.614 surplus_hugepages=0 00:04:24.614 10:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.614 anon_hugepages=0 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40897696 kB' 'MemAvailable: 44819628 kB' 'Buffers: 2704 kB' 'Cached: 14862384 kB' 'SwapCached: 0 kB' 'Active: 11741064 kB' 'Inactive: 3693420 kB' 'Active(anon): 11301292 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 
3693420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572744 kB' 'Mapped: 203052 kB' 'Shmem: 10731896 kB' 'KReclaimable: 433176 kB' 'Slab: 825320 kB' 'SReclaimable: 433176 kB' 'SUnreclaim: 392144 kB' 'KernelStack: 12848 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12432076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197128 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1871452 kB' 'DirectMap2M: 19019776 kB' 'DirectMap1G: 48234496 kB' 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.614 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical setup/common.sh@31/@32 scan trace repeated for each remaining /proc/meminfo key ...] 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc --
setup/hugepages.sh@27 -- # local node 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.616 10:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18271980 kB' 'MemUsed: 14557904 kB' 'SwapCached: 0 kB' 'Active: 8197228 kB' 'Inactive: 3337460 kB' 'Active(anon): 7841472 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11272576 kB' 'Mapped: 162640 kB' 'AnonPages: 265264 kB' 'Shmem: 7579360 kB' 'KernelStack: 7480 kB' 'PageTables: 4992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154060 kB' 'Slab: 330396 kB' 'SReclaimable: 154060 kB' 'SUnreclaim: 176336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.616 10:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.616 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 
10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.617 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:24.618 node0=1024 expecting 1024 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:24.618 00:04:24.618 real 0m2.745s 00:04:24.618 user 0m1.158s 00:04:24.618 sys 0m1.508s 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.618 10:19:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:24.618 ************************************ 00:04:24.618 END TEST no_shrink_alloc 00:04:24.618 ************************************ 00:04:24.618 10:19:28 setup.sh.hugepages -- 
setup/hugepages.sh@217 -- # clear_hp 00:04:24.618 10:19:28 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:24.618 10:19:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:24.618 10:19:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.618 10:19:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.618 10:19:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.618 10:19:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.618 10:19:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:24.618 10:19:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.618 10:19:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.618 10:19:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.618 10:19:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.618 10:19:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:24.618 10:19:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:24.618 00:04:24.618 real 0m11.039s 00:04:24.618 user 0m4.242s 00:04:24.618 sys 0m5.641s 00:04:24.618 10:19:28 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.618 10:19:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.618 ************************************ 00:04:24.618 END TEST hugepages 00:04:24.618 ************************************ 00:04:24.618 10:19:28 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:24.618 10:19:28 setup.sh -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:04:24.618 10:19:28 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.618 10:19:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.618 ************************************ 00:04:24.618 START TEST driver 00:04:24.618 ************************************ 00:04:24.618 10:19:28 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:24.618 * Looking for test storage... 00:04:24.618 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:24.618 10:19:28 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:24.618 10:19:28 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.618 10:19:28 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.151 10:19:30 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:27.151 10:19:30 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.151 10:19:30 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.151 10:19:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:27.151 ************************************ 00:04:27.151 START TEST guess_driver 00:04:27.151 ************************************ 00:04:27.151 10:19:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:27.151 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:27.151 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:27.151 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:27.151 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:27.151 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:27.151 10:19:30 
setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:27.152 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:27.152 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:27.152 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:27.152 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:27.152 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:27.152 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:27.152 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # 
driver=vfio-pci 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:27.152 Looking for driver=vfio-pci 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.152 10:19:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r 
_ _ _ _ marker setup_driver 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.088 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.347 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.347 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.347 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.347 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.347 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.347 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.347 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.347 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.347 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.347 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == 
\-\> ]] 00:04:28.347 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.348 10:19:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.285 10:19:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.285 10:19:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
00:04:29.285 10:19:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.285 10:19:32 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:29.285 10:19:32 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:29.285 10:19:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.285 10:19:32 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.814 00:04:31.814 real 0m4.758s 00:04:31.814 user 0m1.027s 00:04:31.814 sys 0m1.753s 00:04:31.814 10:19:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.814 10:19:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:31.814 ************************************ 00:04:31.814 END TEST guess_driver 00:04:31.814 ************************************ 00:04:31.814 00:04:31.814 real 0m7.232s 00:04:31.814 user 0m1.582s 00:04:31.814 sys 0m2.682s 00:04:31.814 10:19:35 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.814 10:19:35 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:31.814 ************************************ 00:04:31.814 END TEST driver 00:04:31.814 ************************************ 00:04:31.814 10:19:35 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:31.814 10:19:35 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.814 10:19:35 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.814 10:19:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:31.814 ************************************ 00:04:31.814 START TEST devices 00:04:31.814 ************************************ 00:04:31.814 10:19:35 setup.sh.devices -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:31.814 * Looking for test storage... 00:04:31.814 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:31.814 10:19:35 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:31.814 10:19:35 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:32.071 10:19:35 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.071 10:19:35 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.447 10:19:36 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:33.447 10:19:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:33.447 10:19:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:33.447 10:19:36 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:33.447 10:19:36 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:33.447 10:19:36 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:33.447 10:19:36 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:33.447 10:19:36 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:33.448 10:19:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@200 -- # for 
block in "/sys/block/nvme"!(*c*) 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:33.448 10:19:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:33.448 10:19:36 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:33.448 No valid GPT data, bailing 00:04:33.448 10:19:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:33.448 10:19:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:33.448 10:19:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:33.448 10:19:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:33.448 10:19:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:33.448 10:19:36 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:33.448 10:19:36 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:33.448 10:19:36 setup.sh.devices -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.448 10:19:36 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.448 10:19:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:33.448 ************************************ 00:04:33.448 START TEST nvme_mount 00:04:33.448 ************************************ 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.448 10:19:36 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:33.448 10:19:36 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:34.382 Creating new GPT entries in memory. 00:04:34.382 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.382 other utilities. 00:04:34.382 10:19:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.382 10:19:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.382 10:19:37 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.382 10:19:37 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.382 10:19:37 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:35.319 Creating new GPT entries in memory. 00:04:35.319 The operation has completed successfully. 
00:04:35.319 10:19:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:35.319 10:19:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.319 10:19:39 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2286653 00:04:35.319 10:19:39 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.319 10:19:39 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:35.319 10:19:39 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.319 10:19:39 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:35.319 10:19:39 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.577 
10:19:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.577 10:19:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 
00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.513 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.773 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.773 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:36.773 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.773 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.773 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:36.773 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:36.773 10:19:40 
setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.773 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.773 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:36.773 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:36.773 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:36.773 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:36.773 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.031 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:37.031 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:37.031 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:37.031 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.032 10:19:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 
00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # setup output config 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.408 10:19:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 
00:04:39.345 10:19:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.603 10:19:43 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.603 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.603 00:04:39.603 real 0m6.317s 00:04:39.603 user 0m1.473s 00:04:39.603 sys 0m2.408s 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.603 10:19:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:39.603 ************************************ 00:04:39.603 END TEST nvme_mount 00:04:39.603 ************************************ 00:04:39.872 10:19:43 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:39.872 10:19:43 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.872 10:19:43 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.872 10:19:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:39.872 ************************************ 00:04:39.872 START TEST dm_mount 00:04:39.872 ************************************ 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.872 10:19:43 
setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:39.872 10:19:43 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:40.813 Creating new GPT entries in memory. 00:04:40.813 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:40.813 other utilities. 00:04:40.813 10:19:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:40.813 10:19:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.813 10:19:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.813 10:19:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.813 10:19:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:41.751 Creating new GPT entries in memory. 00:04:41.751 The operation has completed successfully. 00:04:41.751 10:19:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.751 10:19:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.751 10:19:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.751 10:19:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.751 10:19:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:42.687 The operation has completed successfully. 
00:04:42.687 10:19:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.687 10:19:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.687 10:19:46 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2289046 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local 
dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.946 
10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.946 10:19:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:43.881 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.881 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 
10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:43.882 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.140 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.140 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:44.140 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:44.140 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:44.140 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:44.140 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:44.141 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:44.141 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:44.141 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:44.141 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:44.141 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file= 00:04:44.141 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.141 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.141 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.141 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.141 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:44.141 10:19:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.141 10:19:47 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.141 10:19:47 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 
]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:45.102 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.362 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.362 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:45.362 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:45.362 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:45.362 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:45.362 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:45.362 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:45.362 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.362 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:45.362 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 
(ext4): 53 ef 00:04:45.362 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:45.362 10:19:48 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:45.362 00:04:45.362 real 0m5.635s 00:04:45.362 user 0m0.902s 00:04:45.362 sys 0m1.579s 00:04:45.362 10:19:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.362 10:19:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:45.362 ************************************ 00:04:45.362 END TEST dm_mount 00:04:45.362 ************************************ 00:04:45.362 10:19:48 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:45.362 10:19:48 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:45.362 10:19:48 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.362 10:19:48 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.362 10:19:48 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:45.362 10:19:49 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:45.362 10:19:49 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:45.621 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:45.621 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:45.621 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:45.621 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:45.621 10:19:49 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:45.621 10:19:49 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:45.621 10:19:49 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 
00:04:45.621 10:19:49 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.621 10:19:49 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:45.621 10:19:49 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:45.621 10:19:49 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:45.621 00:04:45.621 real 0m13.806s 00:04:45.621 user 0m3.002s 00:04:45.621 sys 0m4.977s 00:04:45.621 10:19:49 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.621 10:19:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:45.621 ************************************ 00:04:45.621 END TEST devices 00:04:45.621 ************************************ 00:04:45.621 00:04:45.621 real 0m42.617s 00:04:45.621 user 0m12.067s 00:04:45.621 sys 0m18.640s 00:04:45.621 10:19:49 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.621 10:19:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:45.621 ************************************ 00:04:45.621 END TEST setup.sh 00:04:45.621 ************************************ 00:04:45.621 10:19:49 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:04:46.996 Hugepages 00:04:46.996 node hugesize free / total 00:04:46.996 node0 1048576kB 0 / 0 00:04:46.996 node0 2048kB 1024 / 1024 00:04:46.996 node1 1048576kB 0 / 0 00:04:46.996 node1 2048kB 1024 / 1024 00:04:46.996 00:04:46.996 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:46.996 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:46.996 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:46.996 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:46.996 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:46.996 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:46.996 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:46.996 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:46.996 I/OAT 0000:00:04.7 8086 0e27 
0 ioatdma - - 00:04:46.996 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:46.996 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:46.996 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:46.996 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:46.996 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:46.996 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:46.996 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:46.996 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:46.996 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:46.996 10:19:50 -- spdk/autotest.sh@130 -- # uname -s 00:04:46.996 10:19:50 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:46.996 10:19:50 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:46.996 10:19:50 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:48.369 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:48.369 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:48.369 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:48.369 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:48.369 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:48.369 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:48.369 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:48.369 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:48.369 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:48.369 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:48.369 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:48.369 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:48.369 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:48.369 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:48.369 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:48.369 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:49.307 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:49.307 10:19:52 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:50.260 10:19:53 -- common/autotest_common.sh@1533 -- # bdfs=() 
00:04:50.260 10:19:53 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:50.260 10:19:53 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:50.260 10:19:53 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:50.260 10:19:53 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:50.260 10:19:53 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:50.260 10:19:53 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:50.260 10:19:53 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:50.260 10:19:53 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:50.518 10:19:54 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:50.518 10:19:54 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:04:50.518 10:19:54 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:51.451 Waiting for block devices as requested 00:04:51.451 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:51.451 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:51.709 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:51.709 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:51.709 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:51.967 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:51.967 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:51.967 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:51.967 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:04:52.225 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:52.225 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:52.225 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:52.483 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:52.483 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:52.483 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:52.483 0000:80:04.1 (8086 0e21): 
vfio-pci -> ioatdma 00:04:52.742 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:52.742 10:19:56 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:52.742 10:19:56 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:04:52.742 10:19:56 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:52.742 10:19:56 -- common/autotest_common.sh@1502 -- # grep 0000:0b:00.0/nvme/nvme 00:04:52.742 10:19:56 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:52.742 10:19:56 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:04:52.742 10:19:56 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:52.742 10:19:56 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:52.742 10:19:56 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:52.742 10:19:56 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:52.742 10:19:56 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:52.742 10:19:56 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:52.742 10:19:56 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:52.742 10:19:56 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:52.742 10:19:56 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:52.742 10:19:56 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:52.742 10:19:56 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:52.742 10:19:56 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:52.742 10:19:56 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:52.742 10:19:56 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:52.742 10:19:56 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:52.742 10:19:56 -- common/autotest_common.sh@1557 -- # continue 
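The `oacs` handling above reads the Optional Admin Command Support field out of `nvme id-ctrl` and masks bit 3 (0x8, namespace management) to decide whether the controller supports it. A minimal sketch of that parsing against a captured output line instead of a live controller (the sample line is an assumption, shaped like `nvme id-ctrl` output):

```shell
#!/usr/bin/env bash
# Parse the OACS field the way the trace does, but from a captured line of
# `nvme id-ctrl` output rather than a live /dev/nvme0 (sample line assumed).
id_ctrl_line='oacs      : 0xf'
oacs=$(cut -d: -f2 <<< "$id_ctrl_line")   # -> ' 0xf', matching the trace
oacs_ns_manage=$(( oacs & 0x8 ))          # bit 3 = namespace management support
if (( oacs_ns_manage != 0 )); then
    echo "controller supports namespace management"
fi
```

With `oacs = 0xf` this yields `oacs_ns_manage=8`, the same value the trace records before it goes on to check `unvmcap`.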
00:04:52.742 10:19:56 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:52.742 10:19:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.742 10:19:56 -- common/autotest_common.sh@10 -- # set +x 00:04:52.742 10:19:56 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:52.742 10:19:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.742 10:19:56 -- common/autotest_common.sh@10 -- # set +x 00:04:52.742 10:19:56 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:54.117 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:54.117 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:54.117 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:54.117 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:54.117 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:54.117 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:54.117 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:54.117 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:54.117 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:54.117 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:54.117 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:54.117 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:54.117 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:54.117 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:54.117 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:54.117 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:55.051 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:55.310 10:19:58 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:55.310 10:19:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.310 10:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:55.310 10:19:58 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:55.310 10:19:58 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:55.310 10:19:58 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 
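The `get_nvme_bdfs` helper used here builds its array by piping the JSON config from `scripts/gen_nvme.sh` through `jq -r '.config[].params.traddr'`. The same extraction can be sketched with sed against a sample config, for environments without jq (the JSON below is an assumption, shaped like gen_nvme.sh output with the single NVMe device this node reports):

```shell
#!/usr/bin/env bash
# Extract NVMe PCI addresses (bdfs) from a gen_nvme.sh-style JSON config.
# The trace uses: gen_nvme.sh | jq -r '.config[].params.traddr'
# This sketch does the equivalent with sed on a hard-coded sample (assumed).
config='{"config":[{"params":{"traddr":"0000:0b:00.0","trtype":"PCIe"}}]}'
bdfs=($(sed -n 's/.*"traddr":"\([^"]*\)".*/\1/p' <<< "$config"))
echo "found ${#bdfs[@]} NVMe bdf(s): ${bdfs[*]}"
```

The `(( 1 == 0 ))` check that follows in the trace is just the helper verifying the array is non-empty before printing each bdf.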
00:04:55.310 10:19:58 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:55.310 10:19:58 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:55.310 10:19:58 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:55.310 10:19:58 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:55.310 10:19:58 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:55.310 10:19:58 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.310 10:19:58 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:55.310 10:19:58 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:55.310 10:19:58 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:55.310 10:19:58 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:04:55.310 10:19:58 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:55.310 10:19:58 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:04:55.310 10:19:58 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:55.310 10:19:58 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:55.310 10:19:58 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:55.310 10:19:58 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:0b:00.0 00:04:55.310 10:19:58 -- common/autotest_common.sh@1592 -- # [[ -z 0000:0b:00.0 ]] 00:04:55.310 10:19:58 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2294229 00:04:55.310 10:19:58 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.310 10:19:58 -- common/autotest_common.sh@1598 -- # waitforlisten 2294229 00:04:55.310 10:19:58 -- common/autotest_common.sh@831 -- # '[' -z 2294229 ']' 00:04:55.310 10:19:58 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.310 10:19:58 -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.310 10:19:58 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.310 10:19:58 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.310 10:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:55.310 [2024-07-25 10:19:58.939264] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:04:55.310 [2024-07-25 10:19:58.939354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2294229 ] 00:04:55.310 [2024-07-25 10:19:59.001311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.568 [2024-07-25 10:19:59.119342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.502 10:19:59 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.502 10:19:59 -- common/autotest_common.sh@864 -- # return 0 00:04:56.502 10:19:59 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:56.502 10:19:59 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:56.502 10:19:59 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:04:59.796 nvme0n1 00:04:59.796 10:20:02 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:59.796 [2024-07-25 10:20:03.187778] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:59.796 [2024-07-25 10:20:03.187824] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert 
TPer failure: 18 00:04:59.796 request: 00:04:59.796 { 00:04:59.796 "nvme_ctrlr_name": "nvme0", 00:04:59.796 "password": "test", 00:04:59.796 "method": "bdev_nvme_opal_revert", 00:04:59.796 "req_id": 1 00:04:59.796 } 00:04:59.796 Got JSON-RPC error response 00:04:59.796 response: 00:04:59.796 { 00:04:59.796 "code": -32603, 00:04:59.796 "message": "Internal error" 00:04:59.796 } 00:04:59.796 10:20:03 -- common/autotest_common.sh@1604 -- # true 00:04:59.796 10:20:03 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:59.796 10:20:03 -- common/autotest_common.sh@1608 -- # killprocess 2294229 00:04:59.796 10:20:03 -- common/autotest_common.sh@950 -- # '[' -z 2294229 ']' 00:04:59.796 10:20:03 -- common/autotest_common.sh@954 -- # kill -0 2294229 00:04:59.796 10:20:03 -- common/autotest_common.sh@955 -- # uname 00:04:59.796 10:20:03 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:59.796 10:20:03 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2294229 00:04:59.796 10:20:03 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:59.796 10:20:03 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:59.796 10:20:03 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2294229' 00:04:59.796 killing process with pid 2294229 00:04:59.796 10:20:03 -- common/autotest_common.sh@969 -- # kill 2294229 00:04:59.796 10:20:03 -- common/autotest_common.sh@974 -- # wait 2294229 00:05:01.701 10:20:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:01.701 10:20:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:01.701 10:20:05 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:05:01.701 10:20:05 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:05:01.701 10:20:05 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:05:01.701 Restarting all devices. 
00:05:01.701 lstat() error: No such file or directory 00:05:01.701 QAT Error: No GENERAL section found 00:05:01.701 Failed to configure qat_dev0 00:05:01.701 lstat() error: No such file or directory 00:05:01.701 QAT Error: No GENERAL section found 00:05:01.701 Failed to configure qat_dev1 00:05:01.701 lstat() error: No such file or directory 00:05:01.701 QAT Error: No GENERAL section found 00:05:01.701 Failed to configure qat_dev2 00:05:01.701 enable sriov 00:05:01.960 Checking status of all devices. 00:05:01.960 There is 3 QAT acceleration device(s) in the system: 00:05:01.960 qat_dev0 - type: c6xx, inst_id: 0, node_id: 1, bsf: 0000:84:00.0, #accel: 5 #engines: 10 state: down 00:05:01.960 qat_dev1 - type: c6xx, inst_id: 1, node_id: 1, bsf: 0000:85:00.0, #accel: 5 #engines: 10 state: down 00:05:01.960 qat_dev2 - type: c6xx, inst_id: 2, node_id: 1, bsf: 0000:86:00.0, #accel: 5 #engines: 10 state: down 00:05:02.527 0000:84:00.0 set to 16 VFs 00:05:03.463 0000:85:00.0 set to 16 VFs 00:05:04.410 0000:86:00.0 set to 16 VFs 00:05:05.382 Properly configured the qat device with driver uio_pci_generic. 
00:05:05.382 10:20:08 -- spdk/autotest.sh@162 -- # timing_enter lib
00:05:05.382 10:20:08 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:05.382 10:20:08 -- common/autotest_common.sh@10 -- # set +x
00:05:05.382 10:20:08 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:05:05.382 10:20:08 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh
00:05:05.382 10:20:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:05.382 10:20:08 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:05.382 10:20:08 -- common/autotest_common.sh@10 -- # set +x
00:05:05.382 ************************************
00:05:05.382 START TEST env
00:05:05.382 ************************************
00:05:05.382 10:20:08 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh
00:05:05.382 * Looking for test storage...
00:05:05.382 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env
00:05:05.382 10:20:08 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut
00:05:05.382 10:20:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:05.382 10:20:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:05.382 10:20:08 env -- common/autotest_common.sh@10 -- # set +x
00:05:05.382 ************************************
00:05:05.382 START TEST env_memory
00:05:05.382 ************************************
00:05:05.382 10:20:08 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut
00:05:05.382
00:05:05.382
00:05:05.382 CUnit - A unit testing framework for C - Version 2.1-3
00:05:05.382 http://cunit.sourceforge.net/
00:05:05.382
00:05:05.382
00:05:05.382 Suite: memory
00:05:05.382 Test: alloc and free memory map ...[2024-07-25 10:20:08.916021] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:05:05.382 passed
00:05:05.382 Test: mem map translation ...[2024-07-25 10:20:08.937871] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:05:05.382 [2024-07-25 10:20:08.937895] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:05:05.382 [2024-07-25 10:20:08.937938] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:05:05.382 [2024-07-25 10:20:08.937950] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:05:05.382 passed
00:05:05.382 Test: mem map registration ...[2024-07-25 10:20:08.982007] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:05:05.382 [2024-07-25 10:20:08.982027] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:05:05.382 passed
00:05:05.382 Test: mem map adjacent registrations ...passed
00:05:05.382
00:05:05.382 Run Summary: Type Total Ran Passed Failed Inactive
00:05:05.382 suites 1 1 n/a 0 0
00:05:05.382 tests 4 4 4 0 0
00:05:05.382 asserts 152 152 152 0 n/a
00:05:05.382
00:05:05.382 Elapsed time = 0.147 seconds
00:05:05.382
00:05:05.382 real 0m0.153s
00:05:05.382 user 0m0.145s
00:05:05.382 sys 0m0.008s
00:05:05.382 10:20:09 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
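The `invalid spdk_mem_map_set_translation parameters` errors above are the unit test deliberately passing unaligned values: SPDK mem maps operate in 2 MiB units, so both `vaddr` and `len` must be multiples of 0x200000, and the address must be a valid user-mode virtual address. A hedged sketch of that validation, inferred from the logged failures rather than copied from `memory.c`:

```python
VALUE_2MB = 0x200000       # 2 MiB granularity of SPDK mem maps (assumed)
MAX_USER_VADDR = 1 << 47   # user-mode VA limit on x86-64 (assumed)

def translation_params_ok(vaddr: int, length: int) -> bool:
    """Mirror the checks implied by the logged *ERROR* lines (sketch)."""
    if vaddr % VALUE_2MB or length % VALUE_2MB:
        # vaddr=2097152 len=1234 and vaddr=1234 len=2097152 both fail here
        return False
    if vaddr + length > MAX_USER_VADDR:
        # vaddr 281474976710656 (2**48) fails here as "invalid usermode virtual address"
        return False
    return True

print(translation_params_ok(2097152, 1234))       # False, as in the log
print(translation_params_ok(0x200000, 0x200000))  # True
```

Note that `2097152` is exactly `0x200000` (2 MiB) and `281474976710656` is `2**48`, one bit past the assumed 47-bit user address space, which is why all four logged calls are rejected and the test still counts as passed.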
00:05:05.382 10:20:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:05.382 ************************************ 00:05:05.382 END TEST env_memory 00:05:05.382 ************************************ 00:05:05.383 10:20:09 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.383 10:20:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.383 10:20:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.383 10:20:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.642 ************************************ 00:05:05.642 START TEST env_vtophys 00:05:05.642 ************************************ 00:05:05.642 10:20:09 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.642 EAL: lib.eal log level changed from notice to debug 00:05:05.642 EAL: Detected lcore 0 as core 0 on socket 0 00:05:05.642 EAL: Detected lcore 1 as core 1 on socket 0 00:05:05.642 EAL: Detected lcore 2 as core 2 on socket 0 00:05:05.642 EAL: Detected lcore 3 as core 3 on socket 0 00:05:05.642 EAL: Detected lcore 4 as core 4 on socket 0 00:05:05.642 EAL: Detected lcore 5 as core 5 on socket 0 00:05:05.642 EAL: Detected lcore 6 as core 8 on socket 0 00:05:05.642 EAL: Detected lcore 7 as core 9 on socket 0 00:05:05.642 EAL: Detected lcore 8 as core 10 on socket 0 00:05:05.642 EAL: Detected lcore 9 as core 11 on socket 0 00:05:05.642 EAL: Detected lcore 10 as core 12 on socket 0 00:05:05.642 EAL: Detected lcore 11 as core 13 on socket 0 00:05:05.642 EAL: Detected lcore 12 as core 0 on socket 1 00:05:05.642 EAL: Detected lcore 13 as core 1 on socket 1 00:05:05.642 EAL: Detected lcore 14 as core 2 on socket 1 00:05:05.642 EAL: Detected lcore 15 as core 3 on socket 1 00:05:05.642 EAL: Detected lcore 16 as core 4 on socket 1 00:05:05.642 EAL: Detected lcore 17 as core 5 on socket 1 00:05:05.642 EAL: 
Detected lcore 18 as core 8 on socket 1 00:05:05.642 EAL: Detected lcore 19 as core 9 on socket 1 00:05:05.642 EAL: Detected lcore 20 as core 10 on socket 1 00:05:05.642 EAL: Detected lcore 21 as core 11 on socket 1 00:05:05.642 EAL: Detected lcore 22 as core 12 on socket 1 00:05:05.642 EAL: Detected lcore 23 as core 13 on socket 1 00:05:05.642 EAL: Detected lcore 24 as core 0 on socket 0 00:05:05.642 EAL: Detected lcore 25 as core 1 on socket 0 00:05:05.642 EAL: Detected lcore 26 as core 2 on socket 0 00:05:05.642 EAL: Detected lcore 27 as core 3 on socket 0 00:05:05.642 EAL: Detected lcore 28 as core 4 on socket 0 00:05:05.642 EAL: Detected lcore 29 as core 5 on socket 0 00:05:05.642 EAL: Detected lcore 30 as core 8 on socket 0 00:05:05.642 EAL: Detected lcore 31 as core 9 on socket 0 00:05:05.642 EAL: Detected lcore 32 as core 10 on socket 0 00:05:05.642 EAL: Detected lcore 33 as core 11 on socket 0 00:05:05.642 EAL: Detected lcore 34 as core 12 on socket 0 00:05:05.642 EAL: Detected lcore 35 as core 13 on socket 0 00:05:05.642 EAL: Detected lcore 36 as core 0 on socket 1 00:05:05.642 EAL: Detected lcore 37 as core 1 on socket 1 00:05:05.642 EAL: Detected lcore 38 as core 2 on socket 1 00:05:05.642 EAL: Detected lcore 39 as core 3 on socket 1 00:05:05.642 EAL: Detected lcore 40 as core 4 on socket 1 00:05:05.642 EAL: Detected lcore 41 as core 5 on socket 1 00:05:05.642 EAL: Detected lcore 42 as core 8 on socket 1 00:05:05.642 EAL: Detected lcore 43 as core 9 on socket 1 00:05:05.642 EAL: Detected lcore 44 as core 10 on socket 1 00:05:05.642 EAL: Detected lcore 45 as core 11 on socket 1 00:05:05.642 EAL: Detected lcore 46 as core 12 on socket 1 00:05:05.642 EAL: Detected lcore 47 as core 13 on socket 1 00:05:05.642 EAL: Maximum logical cores by configuration: 128 00:05:05.642 EAL: Detected CPU lcores: 48 00:05:05.642 EAL: Detected NUMA nodes: 2 00:05:05.642 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:05.642 EAL: Detected shared linkage of DPDK 
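The `Detected lcore N as core M on socket S` lines above describe a 2-socket, 48-lcore topology: lcores 0-23 and 24-47 map to the same (core, socket) pairs, i.e. 24 physical cores with hyper-threading. A small sketch that parses such lines into a topology table (the regex is an assumption matched to the log format, not EAL code):

```python
import re

LCORE_RE = re.compile(r"Detected lcore (\d+) as core (\d+) on socket (\d+)")

def parse_lcores(lines):
    """Map lcore id -> (core id, socket id) from EAL detection output."""
    topo = {}
    for line in lines:
        m = LCORE_RE.search(line)
        if m:
            lcore, core, socket = map(int, m.groups())
            topo[lcore] = (core, socket)
    return topo

# Sample lines taken from the log above; lcore 24 is the HT sibling of lcore 0.
sample = [
    "EAL: Detected lcore 0 as core 0 on socket 0",
    "EAL: Detected lcore 24 as core 0 on socket 0",
    "EAL: Detected lcore 18 as core 8 on socket 1",
]
topo = parse_lcores(sample)
print(topo[0] == topo[24])  # True: both threads of the same physical core
```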
00:05:05.642 EAL: No shared files mode enabled, IPC will be disabled 00:05:05.642 EAL: No shared files mode enabled, IPC is disabled 00:05:05.642 EAL: PCI driver qat for device 0000:84:01.0 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:01.1 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:01.2 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:01.3 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:01.4 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:01.5 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:01.6 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:01.7 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:02.0 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:02.1 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:02.2 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:02.3 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:02.4 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:02.5 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:02.6 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:84:02.7 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:01.0 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:01.1 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:01.2 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:01.3 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:01.4 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:01.5 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:01.6 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:01.7 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for 
device 0000:85:02.0 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:02.1 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:02.2 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:02.3 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:02.4 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:02.5 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:02.6 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:85:02.7 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:86:01.0 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:86:01.1 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:86:01.2 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:86:01.3 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:86:01.4 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:86:01.5 wants IOVA as 'PA' 00:05:05.642 EAL: PCI driver qat for device 0000:86:01.6 wants IOVA as 'PA' 00:05:05.643 EAL: PCI driver qat for device 0000:86:01.7 wants IOVA as 'PA' 00:05:05.643 EAL: PCI driver qat for device 0000:86:02.0 wants IOVA as 'PA' 00:05:05.643 EAL: PCI driver qat for device 0000:86:02.1 wants IOVA as 'PA' 00:05:05.643 EAL: PCI driver qat for device 0000:86:02.2 wants IOVA as 'PA' 00:05:05.643 EAL: PCI driver qat for device 0000:86:02.3 wants IOVA as 'PA' 00:05:05.643 EAL: PCI driver qat for device 0000:86:02.4 wants IOVA as 'PA' 00:05:05.643 EAL: PCI driver qat for device 0000:86:02.5 wants IOVA as 'PA' 00:05:05.643 EAL: PCI driver qat for device 0000:86:02.6 wants IOVA as 'PA' 00:05:05.643 EAL: PCI driver qat for device 0000:86:02.7 wants IOVA as 'PA' 00:05:05.643 EAL: Bus pci wants IOVA as 'PA' 00:05:05.643 EAL: Bus auxiliary wants IOVA as 'DC' 00:05:05.643 EAL: Bus vdev wants IOVA as 'DC' 00:05:05.643 EAL: Selected IOVA mode 'PA' 00:05:05.643 EAL: Probing 
VFIO support... 00:05:05.643 EAL: IOMMU type 1 (Type 1) is supported 00:05:05.643 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:05.643 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:05.643 EAL: VFIO support initialized 00:05:05.643 EAL: Ask a virtual area of 0x2e000 bytes 00:05:05.643 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:05.643 EAL: Setting up physically contiguous memory... 00:05:05.643 EAL: Setting maximum number of open files to 524288 00:05:05.643 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:05.643 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:05.643 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:05.643 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.643 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:05.643 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.643 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.643 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:05.643 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:05.643 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.643 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:05.643 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.643 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.643 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:05.643 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:05.643 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.643 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:05.643 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.643 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.643 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:05.643 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:05.643 EAL: Ask a virtual area of 0x61000 bytes 
00:05:05.643 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:05.643 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.643 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.643 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:05.643 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:05.643 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:05.643 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.643 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:05.643 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.643 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.643 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:05.643 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:05.643 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.643 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:05.643 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.643 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.643 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:05.643 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:05.643 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.643 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:05.643 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.643 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.643 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:05.643 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:05.643 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.643 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:05.643 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.643 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.643 EAL: Virtual area found at 0x201c01000000 (size = 
0x400000000) 00:05:05.643 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:05.643 EAL: Hugepages will be freed exactly as allocated. 00:05:05.643 EAL: No shared files mode enabled, IPC is disabled 00:05:05.643 EAL: No shared files mode enabled, IPC is disabled 00:05:05.643 EAL: TSC frequency is ~2700000 KHz 00:05:05.643 EAL: Main lcore 0 is ready (tid=7fd8b4254b00;cpuset=[0]) 00:05:05.643 EAL: Trying to obtain current memory policy. 00:05:05.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.643 EAL: Restoring previous memory policy: 0 00:05:05.643 EAL: request: mp_malloc_sync 00:05:05.643 EAL: No shared files mode enabled, IPC is disabled 00:05:05.643 EAL: Heap on socket 0 was expanded by 2MB 00:05:05.643 EAL: PCI device 0000:84:01.0 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x202001000000 00:05:05.643 EAL: PCI memory mapped at 0x202001001000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.0 (socket 1) 00:05:05.643 EAL: Trying to obtain current memory policy. 
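The memseg list reservations above are internally consistent: each list asks for a 0x61000-byte header area plus a 0x400000000-byte VA region, and 8192 segments of the detected 2 MiB hugepage size fill that region exactly. A quick check of the arithmetic using only numbers from the log:

```python
HUGEPAGE_SZ = 2097152        # hugepage_sz from the log (2 MiB)
N_SEGS = 8192                # n_segs per memseg list, from the log
LIST_VA_SIZE = 0x400000000   # "Ask a virtual area of 0x400000000 bytes"

# 8192 segments x 2 MiB == 16 GiB, exactly the reserved VA per list.
assert N_SEGS * HUGEPAGE_SZ == LIST_VA_SIZE

# 4 lists per socket x 2 sockets = 8 lists of 16 GiB each.
total_gib = 4 * 2 * LIST_VA_SIZE // 2**30
print(total_gib)  # 128
```

So the EAL reserves 128 GiB of virtual address space for hugepage segments up front; only the pages actually allocated are backed by memory ("Hugepages will be freed exactly as allocated").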
00:05:05.643 EAL: Setting policy MPOL_PREFERRED for socket 1 00:05:05.643 EAL: Restoring previous memory policy: 4 00:05:05.643 EAL: request: mp_malloc_sync 00:05:05.643 EAL: No shared files mode enabled, IPC is disabled 00:05:05.643 EAL: Heap on socket 1 was expanded by 2MB 00:05:05.643 EAL: PCI device 0000:84:01.1 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x202001002000 00:05:05.643 EAL: PCI memory mapped at 0x202001003000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.1 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:01.2 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x202001004000 00:05:05.643 EAL: PCI memory mapped at 0x202001005000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.2 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:01.3 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x202001006000 00:05:05.643 EAL: PCI memory mapped at 0x202001007000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.3 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:01.4 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x202001008000 00:05:05.643 EAL: PCI memory mapped at 0x202001009000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.4 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:01.5 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x20200100a000 00:05:05.643 EAL: PCI memory mapped at 0x20200100b000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.5 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:01.6 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x20200100c000 00:05:05.643 EAL: PCI memory mapped at 0x20200100d000 00:05:05.643 EAL: Probe 
PCI driver: qat (8086:37c9) device: 0000:84:01.6 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:01.7 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x20200100e000 00:05:05.643 EAL: PCI memory mapped at 0x20200100f000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.7 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:02.0 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x202001010000 00:05:05.643 EAL: PCI memory mapped at 0x202001011000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.0 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:02.1 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x202001012000 00:05:05.643 EAL: PCI memory mapped at 0x202001013000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.1 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:02.2 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x202001014000 00:05:05.643 EAL: PCI memory mapped at 0x202001015000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.2 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:02.3 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x202001016000 00:05:05.643 EAL: PCI memory mapped at 0x202001017000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.3 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:02.4 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x202001018000 00:05:05.643 EAL: PCI memory mapped at 0x202001019000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.4 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:02.5 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 
0x20200101a000 00:05:05.643 EAL: PCI memory mapped at 0x20200101b000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.5 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:02.6 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x20200101c000 00:05:05.643 EAL: PCI memory mapped at 0x20200101d000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.6 (socket 1) 00:05:05.643 EAL: PCI device 0000:84:02.7 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x20200101e000 00:05:05.643 EAL: PCI memory mapped at 0x20200101f000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.7 (socket 1) 00:05:05.643 EAL: PCI device 0000:85:01.0 on NUMA socket 1 00:05:05.643 EAL: probe driver: 8086:37c9 qat 00:05:05.643 EAL: PCI memory mapped at 0x202001020000 00:05:05.643 EAL: PCI memory mapped at 0x202001021000 00:05:05.643 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.0 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:01.1 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001022000 00:05:05.644 EAL: PCI memory mapped at 0x202001023000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.1 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:01.2 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001024000 00:05:05.644 EAL: PCI memory mapped at 0x202001025000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.2 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:01.3 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001026000 00:05:05.644 EAL: PCI memory mapped at 0x202001027000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.3 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:01.4 on NUMA socket 1 
00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001028000 00:05:05.644 EAL: PCI memory mapped at 0x202001029000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.4 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:01.5 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x20200102a000 00:05:05.644 EAL: PCI memory mapped at 0x20200102b000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.5 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:01.6 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x20200102c000 00:05:05.644 EAL: PCI memory mapped at 0x20200102d000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.6 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:01.7 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x20200102e000 00:05:05.644 EAL: PCI memory mapped at 0x20200102f000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.7 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:02.0 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001030000 00:05:05.644 EAL: PCI memory mapped at 0x202001031000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.0 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:02.1 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001032000 00:05:05.644 EAL: PCI memory mapped at 0x202001033000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.1 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:02.2 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001034000 00:05:05.644 EAL: PCI memory mapped at 0x202001035000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:85:02.2 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:02.3 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001036000 00:05:05.644 EAL: PCI memory mapped at 0x202001037000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.3 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:02.4 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001038000 00:05:05.644 EAL: PCI memory mapped at 0x202001039000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.4 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:02.5 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x20200103a000 00:05:05.644 EAL: PCI memory mapped at 0x20200103b000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.5 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:02.6 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x20200103c000 00:05:05.644 EAL: PCI memory mapped at 0x20200103d000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.6 (socket 1) 00:05:05.644 EAL: PCI device 0000:85:02.7 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x20200103e000 00:05:05.644 EAL: PCI memory mapped at 0x20200103f000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.7 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:01.0 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001040000 00:05:05.644 EAL: PCI memory mapped at 0x202001041000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.0 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:01.1 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001042000 00:05:05.644 EAL: PCI memory 
mapped at 0x202001043000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.1 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:01.2 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001044000 00:05:05.644 EAL: PCI memory mapped at 0x202001045000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.2 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:01.3 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001046000 00:05:05.644 EAL: PCI memory mapped at 0x202001047000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.3 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:01.4 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001048000 00:05:05.644 EAL: PCI memory mapped at 0x202001049000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.4 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:01.5 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x20200104a000 00:05:05.644 EAL: PCI memory mapped at 0x20200104b000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.5 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:01.6 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x20200104c000 00:05:05.644 EAL: PCI memory mapped at 0x20200104d000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.6 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:01.7 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x20200104e000 00:05:05.644 EAL: PCI memory mapped at 0x20200104f000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.7 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:02.0 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 
00:05:05.644 EAL: PCI memory mapped at 0x202001050000 00:05:05.644 EAL: PCI memory mapped at 0x202001051000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.0 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:02.1 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001052000 00:05:05.644 EAL: PCI memory mapped at 0x202001053000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.1 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:02.2 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001054000 00:05:05.644 EAL: PCI memory mapped at 0x202001055000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.2 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:02.3 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001056000 00:05:05.644 EAL: PCI memory mapped at 0x202001057000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.3 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:02.4 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x202001058000 00:05:05.644 EAL: PCI memory mapped at 0x202001059000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.4 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:02.5 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x20200105a000 00:05:05.644 EAL: PCI memory mapped at 0x20200105b000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.5 (socket 1) 00:05:05.644 EAL: PCI device 0000:86:02.6 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x20200105c000 00:05:05.644 EAL: PCI memory mapped at 0x20200105d000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.6 (socket 1) 00:05:05.644 EAL: PCI 
device 0000:86:02.7 on NUMA socket 1 00:05:05.644 EAL: probe driver: 8086:37c9 qat 00:05:05.644 EAL: PCI memory mapped at 0x20200105e000 00:05:05.644 EAL: PCI memory mapped at 0x20200105f000 00:05:05.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.7 (socket 1) 00:05:05.644 EAL: No shared files mode enabled, IPC is disabled 00:05:05.644 EAL: No shared files mode enabled, IPC is disabled 00:05:05.644 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:05.644 EAL: Mem event callback 'spdk:(nil)' registered 00:05:05.644 00:05:05.644 00:05:05.644 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.644 http://cunit.sourceforge.net/ 00:05:05.644 00:05:05.644 00:05:05.644 Suite: components_suite 00:05:05.644 Test: vtophys_malloc_test ...passed 00:05:05.644 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:05.644 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.644 EAL: Restoring previous memory policy: 4 00:05:05.644 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.644 EAL: request: mp_malloc_sync 00:05:05.644 EAL: No shared files mode enabled, IPC is disabled 00:05:05.644 EAL: Heap on socket 0 was expanded by 4MB 00:05:05.644 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.644 EAL: request: mp_malloc_sync 00:05:05.644 EAL: No shared files mode enabled, IPC is disabled 00:05:05.644 EAL: Heap on socket 0 was shrunk by 4MB 00:05:05.644 EAL: Trying to obtain current memory policy. 
00:05:05.644 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.644 EAL: Restoring previous memory policy: 4 00:05:05.644 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.644 EAL: request: mp_malloc_sync 00:05:05.644 EAL: No shared files mode enabled, IPC is disabled 00:05:05.644 EAL: Heap on socket 0 was expanded by 6MB 00:05:05.644 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.645 EAL: request: mp_malloc_sync 00:05:05.645 EAL: No shared files mode enabled, IPC is disabled 00:05:05.645 EAL: Heap on socket 0 was shrunk by 6MB 00:05:05.645 EAL: Trying to obtain current memory policy. 00:05:05.645 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.645 EAL: Restoring previous memory policy: 4 00:05:05.645 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.645 EAL: request: mp_malloc_sync 00:05:05.645 EAL: No shared files mode enabled, IPC is disabled 00:05:05.645 EAL: Heap on socket 0 was expanded by 10MB 00:05:05.645 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.645 EAL: request: mp_malloc_sync 00:05:05.645 EAL: No shared files mode enabled, IPC is disabled 00:05:05.645 EAL: Heap on socket 0 was shrunk by 10MB 00:05:05.645 EAL: Trying to obtain current memory policy. 00:05:05.645 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.645 EAL: Restoring previous memory policy: 4 00:05:05.645 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.645 EAL: request: mp_malloc_sync 00:05:05.645 EAL: No shared files mode enabled, IPC is disabled 00:05:05.645 EAL: Heap on socket 0 was expanded by 18MB 00:05:05.645 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.645 EAL: request: mp_malloc_sync 00:05:05.645 EAL: No shared files mode enabled, IPC is disabled 00:05:05.645 EAL: Heap on socket 0 was shrunk by 18MB 00:05:05.645 EAL: Trying to obtain current memory policy. 
00:05:05.645 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.645 EAL: Restoring previous memory policy: 4 00:05:05.645 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.645 EAL: request: mp_malloc_sync 00:05:05.645 EAL: No shared files mode enabled, IPC is disabled 00:05:05.645 EAL: Heap on socket 0 was expanded by 34MB 00:05:05.645 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.645 EAL: request: mp_malloc_sync 00:05:05.645 EAL: No shared files mode enabled, IPC is disabled 00:05:05.645 EAL: Heap on socket 0 was shrunk by 34MB 00:05:05.645 EAL: Trying to obtain current memory policy. 00:05:05.645 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.645 EAL: Restoring previous memory policy: 4 00:05:05.645 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.645 EAL: request: mp_malloc_sync 00:05:05.645 EAL: No shared files mode enabled, IPC is disabled 00:05:05.645 EAL: Heap on socket 0 was expanded by 66MB 00:05:05.645 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.645 EAL: request: mp_malloc_sync 00:05:05.645 EAL: No shared files mode enabled, IPC is disabled 00:05:05.645 EAL: Heap on socket 0 was shrunk by 66MB 00:05:05.645 EAL: Trying to obtain current memory policy. 00:05:05.645 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.645 EAL: Restoring previous memory policy: 4 00:05:05.645 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.645 EAL: request: mp_malloc_sync 00:05:05.645 EAL: No shared files mode enabled, IPC is disabled 00:05:05.645 EAL: Heap on socket 0 was expanded by 130MB 00:05:05.645 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.903 EAL: request: mp_malloc_sync 00:05:05.903 EAL: No shared files mode enabled, IPC is disabled 00:05:05.903 EAL: Heap on socket 0 was shrunk by 130MB 00:05:05.903 EAL: Trying to obtain current memory policy. 
00:05:05.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.903 EAL: Restoring previous memory policy: 4 00:05:05.903 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.903 EAL: request: mp_malloc_sync 00:05:05.903 EAL: No shared files mode enabled, IPC is disabled 00:05:05.903 EAL: Heap on socket 0 was expanded by 258MB 00:05:05.903 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.903 EAL: request: mp_malloc_sync 00:05:05.903 EAL: No shared files mode enabled, IPC is disabled 00:05:05.903 EAL: Heap on socket 0 was shrunk by 258MB 00:05:05.903 EAL: Trying to obtain current memory policy. 00:05:05.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.161 EAL: Restoring previous memory policy: 4 00:05:06.161 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.161 EAL: request: mp_malloc_sync 00:05:06.161 EAL: No shared files mode enabled, IPC is disabled 00:05:06.161 EAL: Heap on socket 0 was expanded by 514MB 00:05:06.161 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.419 EAL: request: mp_malloc_sync 00:05:06.419 EAL: No shared files mode enabled, IPC is disabled 00:05:06.419 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.419 EAL: Trying to obtain current memory policy. 
00:05:06.419 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:06.677 EAL: Restoring previous memory policy: 4
00:05:06.677 EAL: Calling mem event callback 'spdk:(nil)'
00:05:06.677 EAL: request: mp_malloc_sync
00:05:06.677 EAL: No shared files mode enabled, IPC is disabled
00:05:06.677 EAL: Heap on socket 0 was expanded by 1026MB
00:05:06.935 EAL: Calling mem event callback 'spdk:(nil)'
00:05:07.194 EAL: request: mp_malloc_sync
00:05:07.194 EAL: No shared files mode enabled, IPC is disabled
00:05:07.194 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:07.194 passed
00:05:07.194 
00:05:07.194 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:07.194               suites      1      1    n/a      0        0
00:05:07.194                tests      2      2      2      0        0
00:05:07.194              asserts   6541   6541   6541      0      n/a
00:05:07.194 
00:05:07.194 Elapsed time = 1.460 seconds
00:05:07.194 EAL: No shared files mode enabled, IPC is disabled
00:05:07.194 EAL: No shared files mode enabled, IPC is disabled
00:05:07.194 EAL: No shared files mode enabled, IPC is disabled
00:05:07.194 
00:05:07.194 real 0m1.618s
00:05:07.194 user 0m0.935s
00:05:07.194 sys 0m0.642s
00:05:07.194 10:20:10 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:07.194 10:20:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:07.194 ************************************
00:05:07.194 END TEST env_vtophys
00:05:07.194 ************************************
00:05:07.194 10:20:10 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:05:07.194 10:20:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:07.194 10:20:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:07.194 10:20:10 env -- common/autotest_common.sh@10 -- # set +x
00:05:07.194 ************************************
00:05:07.194 START TEST env_pci
00:05:07.194 ************************************
00:05:07.194 10:20:10 env.env_pci -- common/autotest_common.sh@1125 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:05:07.194 
00:05:07.194 
00:05:07.194 CUnit - A unit testing framework for C - Version 2.1-3
00:05:07.194 http://cunit.sourceforge.net/
00:05:07.194 
00:05:07.194 
00:05:07.194 Suite: pci
00:05:07.194 Test: pci_hook ...[2024-07-25 10:20:10.767245] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2295773 has claimed it
00:05:07.194 EAL: Cannot find device (10000:00:01.0)
00:05:07.194 EAL: Failed to attach device on primary process
00:05:07.194 passed
00:05:07.194 
00:05:07.194 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:07.194               suites      1      1    n/a      0        0
00:05:07.194                tests      1      1      1      0        0
00:05:07.194              asserts     25     25     25      0      n/a
00:05:07.194 
00:05:07.194 Elapsed time = 0.025 seconds
00:05:07.194 
00:05:07.194 real 0m0.040s
00:05:07.194 user 0m0.009s
00:05:07.194 sys 0m0.031s
00:05:07.194 10:20:10 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:07.194 10:20:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:07.194 ************************************
00:05:07.194 END TEST env_pci
00:05:07.194 ************************************
00:05:07.194 10:20:10 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:07.194 10:20:10 env -- env/env.sh@15 -- # uname
00:05:07.194 10:20:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:07.194 10:20:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:07.194 10:20:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:07.194 10:20:10 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:05:07.194 10:20:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:07.194 10:20:10 env -- common/autotest_common.sh@10 -- # set +x
00:05:07.194 ************************************ 00:05:07.194 START TEST env_dpdk_post_init 00:05:07.194 ************************************ 00:05:07.194 10:20:10 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.194 EAL: Detected CPU lcores: 48 00:05:07.194 EAL: Detected NUMA nodes: 2 00:05:07.194 EAL: Detected shared linkage of DPDK 00:05:07.194 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.194 EAL: Selected IOVA mode 'PA' 00:05:07.194 EAL: VFIO support initialized 00:05:07.194 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.0 (socket 1) 00:05:07.194 CRYPTODEV: Creating cryptodev 0000:84:01.0_qat_asym 00:05:07.194 CRYPTODEV: Initialisation parameters - name: 0000:84:01.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.194 CRYPTODEV: Creating cryptodev 0000:84:01.0_qat_sym 00:05:07.194 CRYPTODEV: Initialisation parameters - name: 0000:84:01.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.194 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.1 (socket 1) 00:05:07.194 CRYPTODEV: Creating cryptodev 0000:84:01.1_qat_asym 00:05:07.194 CRYPTODEV: Initialisation parameters - name: 0000:84:01.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:01.1_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:01.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.2 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:01.2_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:01.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:01.2_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:01.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:84:01.3 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:01.3_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:01.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:01.3_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:01.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.4 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:01.4_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:01.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:01.4_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:01.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.5 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:01.5_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:01.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:01.5_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:01.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.6 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:01.6_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:01.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:01.6_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:01.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.7 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:01.7_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:01.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: 
Creating cryptodev 0000:84:01.7_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:01.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.0 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.0_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.0_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.1 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.1_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.1_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.2 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.2_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.2_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.3 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.3_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.3_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.4 (socket 1) 
00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.4_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.4_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.5 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.5_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.5_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.6 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.6_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.6_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.7 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.7_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:84:02.7_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:84:02.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.0 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.0_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.0_qat_sym 
00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.1 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.1_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.1_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.2 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.2_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.2_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.3 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.3_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.3_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.4 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.4_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.4_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.5 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 
0000:85:01.5_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.5_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.6 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.6_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.6_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.7 (socket 1) 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.7_qat_asym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.195 CRYPTODEV: Creating cryptodev 0000:85:01.7_qat_sym 00:05:07.195 CRYPTODEV: Initialisation parameters - name: 0000:85:01.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.195 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.0 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.0_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.0_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.1 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.1_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.1_qat_sym 00:05:07.454 CRYPTODEV: Initialisation 
parameters - name: 0000:85:02.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.2 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.2_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.2_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.3 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.3_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.3_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.4 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.4_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.4_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.5 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.5_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.5_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.6 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.6_qat_asym 00:05:07.454 CRYPTODEV: 
Initialisation parameters - name: 0000:85:02.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.6_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.7 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.7_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:85:02.7_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:85:02.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.0 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.0_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.0_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.1 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.1_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.1_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.2 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.2_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.2_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.2_qat_sym,socket id: 1, 
max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.3 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.3_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.3_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.4 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.4_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.4_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.5 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.5_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.5_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.6 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.6_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.6_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.7 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.7_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 
0000:86:01.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:01.7_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:01.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.0 (socket 1) 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:02.0_qat_asym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:02.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.454 CRYPTODEV: Creating cryptodev 0000:86:02.0_qat_sym 00:05:07.454 CRYPTODEV: Initialisation parameters - name: 0000:86:02.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.454 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.1 (socket 1) 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.1_qat_asym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.1_qat_sym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.455 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.2 (socket 1) 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.2_qat_asym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.2_qat_sym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.455 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.3 (socket 1) 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.3_qat_asym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.3_qat_sym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.455 
EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.4 (socket 1) 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.4_qat_asym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.4_qat_sym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.455 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.5 (socket 1) 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.5_qat_asym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.5_qat_sym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.455 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.6 (socket 1) 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.6_qat_asym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.6_qat_sym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.455 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.7 (socket 1) 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.7_qat_asym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:07.455 CRYPTODEV: Creating cryptodev 0000:86:02.7_qat_sym 00:05:07.455 CRYPTODEV: Initialisation parameters - name: 0000:86:02.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:07.455 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.455 EAL: Using IOMMU type 1 (Type 1) 00:05:07.455 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:07.455 EAL: Probe PCI driver: spdk_ioat 
(8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:07.455 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:07.455 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:07.455 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:07.455 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:07.455 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:07.455 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:08.392 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:05:08.392 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:08.392 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:08.392 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:08.392 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:08.392 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:08.392 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:08.392 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:08.392 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:11.684 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:05:11.684 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001080000 00:05:11.684 Starting DPDK initialization... 00:05:11.684 Starting SPDK post initialization... 00:05:11.684 SPDK NVMe probe 00:05:11.684 Attaching to 0000:0b:00.0 00:05:11.684 Attached to 0000:0b:00.0 00:05:11.684 Cleaning up... 
00:05:11.684 00:05:11.684 real 0m4.411s 00:05:11.684 user 0m3.254s 00:05:11.684 sys 0m0.217s 00:05:11.684 10:20:15 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.684 10:20:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.684 ************************************ 00:05:11.684 END TEST env_dpdk_post_init 00:05:11.684 ************************************ 00:05:11.684 10:20:15 env -- env/env.sh@26 -- # uname 00:05:11.684 10:20:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:11.684 10:20:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.684 10:20:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.684 10:20:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.684 10:20:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.684 ************************************ 00:05:11.684 START TEST env_mem_callbacks 00:05:11.684 ************************************ 00:05:11.684 10:20:15 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.684 EAL: Detected CPU lcores: 48 00:05:11.684 EAL: Detected NUMA nodes: 2 00:05:11.684 EAL: Detected shared linkage of DPDK 00:05:11.685 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.685 EAL: Selected IOVA mode 'PA' 00:05:11.685 EAL: VFIO support initialized 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.0 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.0_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.0_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI 
driver: qat (8086:37c9) device: 0000:84:01.1 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.1_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.1_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.2 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.2_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.2_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.3 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.3_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.3_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.4 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.4_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.4_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.5 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.5_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 
CRYPTODEV: Creating cryptodev 0000:84:01.5_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.6 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.6_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.6_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:01.7 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.7_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:01.7_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:01.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.0 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.0_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.0_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.1 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.1_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.1_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.2 (socket 1) 
00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.2_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.2_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.3 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.3_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.3_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.4 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.4_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.4_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.5 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.5_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.5_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.6 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.6_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.6_qat_sym 
00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:84:02.7 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.7_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:84:02.7_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:84:02.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.0 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.0_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.0_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.1 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.1_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.1_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.2 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.2_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.2_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.3 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 
0000:85:01.3_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.3_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.4 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.4_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.4_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.5 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.5_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.5_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.6 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.6_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.6_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:01.7 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.7_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:01.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:01.7_qat_sym 00:05:11.685 CRYPTODEV: Initialisation 
parameters - name: 0000:85:01.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.0 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:02.0_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:02.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:02.0_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:02.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.1 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:02.1_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:02.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:02.1_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:02.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.2 (socket 1) 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:02.2_qat_asym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:02.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.685 CRYPTODEV: Creating cryptodev 0000:85:02.2_qat_sym 00:05:11.685 CRYPTODEV: Initialisation parameters - name: 0000:85:02.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.3 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:85:02.3_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:85:02.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:85:02.3_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:85:02.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.4 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:85:02.4_qat_asym 00:05:11.686 CRYPTODEV: 
Initialisation parameters - name: 0000:85:02.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:85:02.4_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:85:02.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.5 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:85:02.5_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:85:02.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:85:02.5_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:85:02.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.6 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:85:02.6_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:85:02.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:85:02.6_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:85:02.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:85:02.7 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:85:02.7_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:85:02.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:85:02.7_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:85:02.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.0 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.0_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.0_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.0_qat_sym,socket id: 1, 
max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.1 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.1_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.1_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.2 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.2_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.2_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.3 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.3_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.3_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.4 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.4_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.4_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.5 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.5_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 
0000:86:01.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.5_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.6 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.6_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.6_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:01.7 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.7_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:01.7_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:01.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.0 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.0_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.0_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.1 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.1_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.1_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 
EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.2 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.2_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.2_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.3 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.3_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.3_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.4 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.4_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.4_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.5 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.5_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.5_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.6 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.6_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.6_qat_asym,socket id: 1, max queue pairs: 
0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.6_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:86:02.7 (socket 1) 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.7_qat_asym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:11.686 CRYPTODEV: Creating cryptodev 0000:86:02.7_qat_sym 00:05:11.686 CRYPTODEV: Initialisation parameters - name: 0000:86:02.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:11.686 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.686 00:05:11.686 00:05:11.686 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.686 http://cunit.sourceforge.net/ 00:05:11.686 00:05:11.686 00:05:11.686 Suite: memory 00:05:11.686 Test: test ... 00:05:11.686 register 0x200000200000 2097152 00:05:11.686 register 0x201000a00000 2097152 00:05:11.686 malloc 3145728 00:05:11.686 register 0x200000400000 4194304 00:05:11.686 buf 0x200000500000 len 3145728 PASSED 00:05:11.686 malloc 64 00:05:11.686 buf 0x2000004fff40 len 64 PASSED 00:05:11.686 malloc 4194304 00:05:11.686 register 0x200000800000 6291456 00:05:11.686 buf 0x200000a00000 len 4194304 PASSED 00:05:11.686 free 0x200000500000 3145728 00:05:11.686 free 0x2000004fff40 64 00:05:11.686 unregister 0x200000400000 4194304 PASSED 00:05:11.686 free 0x200000a00000 4194304 00:05:11.686 unregister 0x200000800000 6291456 PASSED 00:05:11.686 malloc 8388608 00:05:11.686 register 0x200000400000 10485760 00:05:11.686 buf 0x200000600000 len 8388608 PASSED 00:05:11.686 free 0x200000600000 8388608 00:05:11.686 unregister 0x200000400000 10485760 PASSED 00:05:11.686 passed 00:05:11.686 00:05:11.686 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.686 suites 1 1 n/a 0 0 00:05:11.686 tests 1 1 1 0 0 00:05:11.686 asserts 16 16 16 0 n/a 00:05:11.686 00:05:11.686 Elapsed time = 0.006 
seconds 00:05:11.686 00:05:11.686 real 0m0.069s 00:05:11.686 user 0m0.019s 00:05:11.686 sys 0m0.050s 00:05:11.686 10:20:15 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.686 10:20:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:11.686 ************************************ 00:05:11.686 END TEST env_mem_callbacks 00:05:11.686 ************************************ 00:05:11.945 00:05:11.945 real 0m6.576s 00:05:11.945 user 0m4.477s 00:05:11.945 sys 0m1.137s 00:05:11.945 10:20:15 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.945 10:20:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.945 ************************************ 00:05:11.945 END TEST env 00:05:11.945 ************************************ 00:05:11.945 10:20:15 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.945 10:20:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.945 10:20:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.945 10:20:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.945 ************************************ 00:05:11.945 START TEST rpc 00:05:11.945 ************************************ 00:05:11.945 10:20:15 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.945 * Looking for test storage... 
00:05:11.945 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:11.945 10:20:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2296433 00:05:11.945 10:20:15 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:11.945 10:20:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.945 10:20:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2296433 00:05:11.945 10:20:15 rpc -- common/autotest_common.sh@831 -- # '[' -z 2296433 ']' 00:05:11.945 10:20:15 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.945 10:20:15 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.945 10:20:15 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.945 10:20:15 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.945 10:20:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.945 [2024-07-25 10:20:15.545977] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:05:11.945 [2024-07-25 10:20:15.546064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2296433 ] 00:05:11.945 [2024-07-25 10:20:15.629556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.204 [2024-07-25 10:20:15.746354] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:12.204 [2024-07-25 10:20:15.746440] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2296433' to capture a snapshot of events at runtime. 
00:05:12.204 [2024-07-25 10:20:15.746457] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:12.204 [2024-07-25 10:20:15.746479] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:12.204 [2024-07-25 10:20:15.746490] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2296433 for offline analysis/debug. 00:05:12.204 [2024-07-25 10:20:15.746522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.768 10:20:16 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.768 10:20:16 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:13.026 10:20:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:13.026 10:20:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:13.026 10:20:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:13.026 10:20:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:13.026 10:20:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.026 10:20:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.026 10:20:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.026 ************************************ 00:05:13.026 START TEST rpc_integrity 00:05:13.026 ************************************ 00:05:13.026 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 
00:05:13.026 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.026 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.026 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.026 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.026 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.026 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:13.026 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.026 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.026 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.026 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.026 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.026 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:13.026 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.026 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.026 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.026 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.026 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.026 { 00:05:13.026 "name": "Malloc0", 00:05:13.026 "aliases": [ 00:05:13.026 "37557ed8-c999-4b0a-99f9-a4176bdecb6c" 00:05:13.026 ], 00:05:13.026 "product_name": "Malloc disk", 00:05:13.026 "block_size": 512, 00:05:13.026 "num_blocks": 16384, 00:05:13.026 "uuid": "37557ed8-c999-4b0a-99f9-a4176bdecb6c", 00:05:13.026 "assigned_rate_limits": { 00:05:13.026 "rw_ios_per_sec": 0, 00:05:13.026 "rw_mbytes_per_sec": 0, 00:05:13.026 "r_mbytes_per_sec": 0, 00:05:13.026 "w_mbytes_per_sec": 0 00:05:13.026 }, 00:05:13.026 "claimed": false, 00:05:13.026 
"zoned": false, 00:05:13.026 "supported_io_types": { 00:05:13.026 "read": true, 00:05:13.026 "write": true, 00:05:13.026 "unmap": true, 00:05:13.026 "flush": true, 00:05:13.026 "reset": true, 00:05:13.026 "nvme_admin": false, 00:05:13.026 "nvme_io": false, 00:05:13.026 "nvme_io_md": false, 00:05:13.026 "write_zeroes": true, 00:05:13.026 "zcopy": true, 00:05:13.026 "get_zone_info": false, 00:05:13.026 "zone_management": false, 00:05:13.026 "zone_append": false, 00:05:13.026 "compare": false, 00:05:13.026 "compare_and_write": false, 00:05:13.026 "abort": true, 00:05:13.026 "seek_hole": false, 00:05:13.026 "seek_data": false, 00:05:13.026 "copy": true, 00:05:13.026 "nvme_iov_md": false 00:05:13.026 }, 00:05:13.026 "memory_domains": [ 00:05:13.026 { 00:05:13.026 "dma_device_id": "system", 00:05:13.026 "dma_device_type": 1 00:05:13.026 }, 00:05:13.026 { 00:05:13.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.026 "dma_device_type": 2 00:05:13.026 } 00:05:13.026 ], 00:05:13.026 "driver_specific": {} 00:05:13.026 } 00:05:13.026 ]' 00:05:13.026 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.026 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.026 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:13.026 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.026 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.026 [2024-07-25 10:20:16.616849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:13.026 [2024-07-25 10:20:16.616895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.026 [2024-07-25 10:20:16.616920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa19380 00:05:13.027 [2024-07-25 10:20:16.616936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.027 [2024-07-25 
10:20:16.618493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.027 [2024-07-25 10:20:16.618521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.027 Passthru0 00:05:13.027 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.027 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.027 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.027 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.027 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.027 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.027 { 00:05:13.027 "name": "Malloc0", 00:05:13.027 "aliases": [ 00:05:13.027 "37557ed8-c999-4b0a-99f9-a4176bdecb6c" 00:05:13.027 ], 00:05:13.027 "product_name": "Malloc disk", 00:05:13.027 "block_size": 512, 00:05:13.027 "num_blocks": 16384, 00:05:13.027 "uuid": "37557ed8-c999-4b0a-99f9-a4176bdecb6c", 00:05:13.027 "assigned_rate_limits": { 00:05:13.027 "rw_ios_per_sec": 0, 00:05:13.027 "rw_mbytes_per_sec": 0, 00:05:13.027 "r_mbytes_per_sec": 0, 00:05:13.027 "w_mbytes_per_sec": 0 00:05:13.027 }, 00:05:13.027 "claimed": true, 00:05:13.027 "claim_type": "exclusive_write", 00:05:13.027 "zoned": false, 00:05:13.027 "supported_io_types": { 00:05:13.027 "read": true, 00:05:13.027 "write": true, 00:05:13.027 "unmap": true, 00:05:13.027 "flush": true, 00:05:13.027 "reset": true, 00:05:13.027 "nvme_admin": false, 00:05:13.027 "nvme_io": false, 00:05:13.027 "nvme_io_md": false, 00:05:13.027 "write_zeroes": true, 00:05:13.027 "zcopy": true, 00:05:13.027 "get_zone_info": false, 00:05:13.027 "zone_management": false, 00:05:13.027 "zone_append": false, 00:05:13.027 "compare": false, 00:05:13.027 "compare_and_write": false, 00:05:13.027 "abort": true, 00:05:13.027 "seek_hole": false, 00:05:13.027 "seek_data": false, 
00:05:13.027 "copy": true, 00:05:13.027 "nvme_iov_md": false 00:05:13.027 }, 00:05:13.027 "memory_domains": [ 00:05:13.027 { 00:05:13.027 "dma_device_id": "system", 00:05:13.027 "dma_device_type": 1 00:05:13.027 }, 00:05:13.027 { 00:05:13.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.027 "dma_device_type": 2 00:05:13.027 } 00:05:13.027 ], 00:05:13.027 "driver_specific": {} 00:05:13.027 }, 00:05:13.027 { 00:05:13.027 "name": "Passthru0", 00:05:13.027 "aliases": [ 00:05:13.027 "9f1e88a2-338d-54e1-ad22-fca81e88cabe" 00:05:13.027 ], 00:05:13.027 "product_name": "passthru", 00:05:13.027 "block_size": 512, 00:05:13.027 "num_blocks": 16384, 00:05:13.027 "uuid": "9f1e88a2-338d-54e1-ad22-fca81e88cabe", 00:05:13.027 "assigned_rate_limits": { 00:05:13.027 "rw_ios_per_sec": 0, 00:05:13.027 "rw_mbytes_per_sec": 0, 00:05:13.027 "r_mbytes_per_sec": 0, 00:05:13.027 "w_mbytes_per_sec": 0 00:05:13.027 }, 00:05:13.027 "claimed": false, 00:05:13.027 "zoned": false, 00:05:13.027 "supported_io_types": { 00:05:13.027 "read": true, 00:05:13.027 "write": true, 00:05:13.027 "unmap": true, 00:05:13.027 "flush": true, 00:05:13.027 "reset": true, 00:05:13.027 "nvme_admin": false, 00:05:13.027 "nvme_io": false, 00:05:13.027 "nvme_io_md": false, 00:05:13.027 "write_zeroes": true, 00:05:13.027 "zcopy": true, 00:05:13.027 "get_zone_info": false, 00:05:13.027 "zone_management": false, 00:05:13.027 "zone_append": false, 00:05:13.027 "compare": false, 00:05:13.027 "compare_and_write": false, 00:05:13.027 "abort": true, 00:05:13.027 "seek_hole": false, 00:05:13.027 "seek_data": false, 00:05:13.027 "copy": true, 00:05:13.027 "nvme_iov_md": false 00:05:13.027 }, 00:05:13.027 "memory_domains": [ 00:05:13.027 { 00:05:13.027 "dma_device_id": "system", 00:05:13.027 "dma_device_type": 1 00:05:13.027 }, 00:05:13.027 { 00:05:13.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.027 "dma_device_type": 2 00:05:13.027 } 00:05:13.027 ], 00:05:13.027 "driver_specific": { 00:05:13.027 "passthru": { 
00:05:13.027 "name": "Passthru0", 00:05:13.027 "base_bdev_name": "Malloc0" 00:05:13.027 } 00:05:13.027 } 00:05:13.027 } 00:05:13.027 ]' 00:05:13.027 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.027 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.027 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.027 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.027 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.027 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.027 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:13.027 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.027 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.027 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.027 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.027 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.027 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.027 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.027 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.027 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.285 10:20:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.285 00:05:13.285 real 0m0.234s 00:05:13.285 user 0m0.149s 00:05:13.285 sys 0m0.024s 00:05:13.285 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.285 10:20:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.285 ************************************ 00:05:13.285 END TEST rpc_integrity 00:05:13.285 
************************************ 00:05:13.285 10:20:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:13.285 10:20:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.285 10:20:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.285 10:20:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.285 ************************************ 00:05:13.285 START TEST rpc_plugins 00:05:13.285 ************************************ 00:05:13.285 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:13.285 10:20:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:13.285 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.285 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.285 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.285 10:20:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:13.285 10:20:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:13.285 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.285 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.285 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.285 10:20:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:13.285 { 00:05:13.285 "name": "Malloc1", 00:05:13.285 "aliases": [ 00:05:13.285 "a4ad7543-094c-4cce-ba53-aae792ee99d6" 00:05:13.285 ], 00:05:13.285 "product_name": "Malloc disk", 00:05:13.285 "block_size": 4096, 00:05:13.285 "num_blocks": 256, 00:05:13.285 "uuid": "a4ad7543-094c-4cce-ba53-aae792ee99d6", 00:05:13.285 "assigned_rate_limits": { 00:05:13.285 "rw_ios_per_sec": 0, 00:05:13.285 "rw_mbytes_per_sec": 0, 00:05:13.285 "r_mbytes_per_sec": 0, 00:05:13.285 "w_mbytes_per_sec": 0 00:05:13.285 }, 00:05:13.285 "claimed": false, 00:05:13.285 "zoned": false, 
00:05:13.285 "supported_io_types": { 00:05:13.285 "read": true, 00:05:13.285 "write": true, 00:05:13.285 "unmap": true, 00:05:13.285 "flush": true, 00:05:13.285 "reset": true, 00:05:13.285 "nvme_admin": false, 00:05:13.285 "nvme_io": false, 00:05:13.285 "nvme_io_md": false, 00:05:13.285 "write_zeroes": true, 00:05:13.285 "zcopy": true, 00:05:13.285 "get_zone_info": false, 00:05:13.285 "zone_management": false, 00:05:13.285 "zone_append": false, 00:05:13.285 "compare": false, 00:05:13.286 "compare_and_write": false, 00:05:13.286 "abort": true, 00:05:13.286 "seek_hole": false, 00:05:13.286 "seek_data": false, 00:05:13.286 "copy": true, 00:05:13.286 "nvme_iov_md": false 00:05:13.286 }, 00:05:13.286 "memory_domains": [ 00:05:13.286 { 00:05:13.286 "dma_device_id": "system", 00:05:13.286 "dma_device_type": 1 00:05:13.286 }, 00:05:13.286 { 00:05:13.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.286 "dma_device_type": 2 00:05:13.286 } 00:05:13.286 ], 00:05:13.286 "driver_specific": {} 00:05:13.286 } 00:05:13.286 ]' 00:05:13.286 10:20:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:13.286 10:20:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:13.286 10:20:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:13.286 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.286 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.286 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.286 10:20:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:13.286 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.286 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.286 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.286 10:20:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:13.286 10:20:16 
rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:13.286 10:20:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:13.286 00:05:13.286 real 0m0.115s 00:05:13.286 user 0m0.068s 00:05:13.286 sys 0m0.016s 00:05:13.286 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.286 10:20:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.286 ************************************ 00:05:13.286 END TEST rpc_plugins 00:05:13.286 ************************************ 00:05:13.286 10:20:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.286 10:20:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.286 10:20:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.286 10:20:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.286 ************************************ 00:05:13.286 START TEST rpc_trace_cmd_test 00:05:13.286 ************************************ 00:05:13.286 10:20:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:13.286 10:20:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:13.286 10:20:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:13.286 10:20:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.286 10:20:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.286 10:20:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.286 10:20:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:13.286 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2296433", 00:05:13.286 "tpoint_group_mask": "0x8", 00:05:13.286 "iscsi_conn": { 00:05:13.286 "mask": "0x2", 00:05:13.286 "tpoint_mask": "0x0" 00:05:13.286 }, 00:05:13.286 "scsi": { 00:05:13.286 "mask": "0x4", 00:05:13.286 "tpoint_mask": "0x0" 00:05:13.286 }, 00:05:13.286 "bdev": { 00:05:13.286 "mask": "0x8", 
00:05:13.286 "tpoint_mask": "0xffffffffffffffff" 00:05:13.286 }, 00:05:13.286 "nvmf_rdma": { 00:05:13.286 "mask": "0x10", 00:05:13.286 "tpoint_mask": "0x0" 00:05:13.286 }, 00:05:13.286 "nvmf_tcp": { 00:05:13.286 "mask": "0x20", 00:05:13.286 "tpoint_mask": "0x0" 00:05:13.286 }, 00:05:13.286 "ftl": { 00:05:13.286 "mask": "0x40", 00:05:13.286 "tpoint_mask": "0x0" 00:05:13.286 }, 00:05:13.286 "blobfs": { 00:05:13.286 "mask": "0x80", 00:05:13.286 "tpoint_mask": "0x0" 00:05:13.286 }, 00:05:13.286 "dsa": { 00:05:13.286 "mask": "0x200", 00:05:13.286 "tpoint_mask": "0x0" 00:05:13.286 }, 00:05:13.286 "thread": { 00:05:13.286 "mask": "0x400", 00:05:13.286 "tpoint_mask": "0x0" 00:05:13.286 }, 00:05:13.286 "nvme_pcie": { 00:05:13.286 "mask": "0x800", 00:05:13.286 "tpoint_mask": "0x0" 00:05:13.286 }, 00:05:13.286 "iaa": { 00:05:13.286 "mask": "0x1000", 00:05:13.286 "tpoint_mask": "0x0" 00:05:13.286 }, 00:05:13.286 "nvme_tcp": { 00:05:13.286 "mask": "0x2000", 00:05:13.286 "tpoint_mask": "0x0" 00:05:13.286 }, 00:05:13.286 "bdev_nvme": { 00:05:13.286 "mask": "0x4000", 00:05:13.286 "tpoint_mask": "0x0" 00:05:13.286 }, 00:05:13.286 "sock": { 00:05:13.286 "mask": "0x8000", 00:05:13.286 "tpoint_mask": "0x0" 00:05:13.286 } 00:05:13.286 }' 00:05:13.286 10:20:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:13.544 10:20:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:13.544 10:20:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:13.544 10:20:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:13.544 10:20:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:13.544 10:20:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:13.544 10:20:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:13.544 10:20:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:13.544 10:20:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- 
# jq -r .bdev.tpoint_mask 00:05:13.544 10:20:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:13.544 00:05:13.544 real 0m0.199s 00:05:13.544 user 0m0.173s 00:05:13.544 sys 0m0.021s 00:05:13.544 10:20:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.544 10:20:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.544 ************************************ 00:05:13.544 END TEST rpc_trace_cmd_test 00:05:13.544 ************************************ 00:05:13.544 10:20:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:13.544 10:20:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:13.544 10:20:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:13.544 10:20:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.544 10:20:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.544 10:20:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.544 ************************************ 00:05:13.544 START TEST rpc_daemon_integrity 00:05:13.544 ************************************ 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.544 10:20:17 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.544 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.803 { 00:05:13.803 "name": "Malloc2", 00:05:13.803 "aliases": [ 00:05:13.803 "386f9b49-f607-4884-a612-3dcdc9b0ca57" 00:05:13.803 ], 00:05:13.803 "product_name": "Malloc disk", 00:05:13.803 "block_size": 512, 00:05:13.803 "num_blocks": 16384, 00:05:13.803 "uuid": "386f9b49-f607-4884-a612-3dcdc9b0ca57", 00:05:13.803 "assigned_rate_limits": { 00:05:13.803 "rw_ios_per_sec": 0, 00:05:13.803 "rw_mbytes_per_sec": 0, 00:05:13.803 "r_mbytes_per_sec": 0, 00:05:13.803 "w_mbytes_per_sec": 0 00:05:13.803 }, 00:05:13.803 "claimed": false, 00:05:13.803 "zoned": false, 00:05:13.803 "supported_io_types": { 00:05:13.803 "read": true, 00:05:13.803 "write": true, 00:05:13.803 "unmap": true, 00:05:13.803 "flush": true, 00:05:13.803 "reset": true, 00:05:13.803 "nvme_admin": false, 00:05:13.803 "nvme_io": false, 00:05:13.803 "nvme_io_md": false, 00:05:13.803 "write_zeroes": true, 00:05:13.803 "zcopy": true, 00:05:13.803 "get_zone_info": false, 00:05:13.803 "zone_management": false, 00:05:13.803 "zone_append": false, 00:05:13.803 "compare": false, 00:05:13.803 "compare_and_write": false, 00:05:13.803 "abort": true, 00:05:13.803 "seek_hole": false, 00:05:13.803 "seek_data": false, 
00:05:13.803 "copy": true, 00:05:13.803 "nvme_iov_md": false 00:05:13.803 }, 00:05:13.803 "memory_domains": [ 00:05:13.803 { 00:05:13.803 "dma_device_id": "system", 00:05:13.803 "dma_device_type": 1 00:05:13.803 }, 00:05:13.803 { 00:05:13.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.803 "dma_device_type": 2 00:05:13.803 } 00:05:13.803 ], 00:05:13.803 "driver_specific": {} 00:05:13.803 } 00:05:13.803 ]' 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.803 [2024-07-25 10:20:17.302812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:13.803 [2024-07-25 10:20:17.302859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.803 [2024-07-25 10:20:17.302887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbbcf80 00:05:13.803 [2024-07-25 10:20:17.302903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.803 [2024-07-25 10:20:17.304301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.803 [2024-07-25 10:20:17.304328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.803 Passthru0 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.803 { 00:05:13.803 "name": "Malloc2", 00:05:13.803 "aliases": [ 00:05:13.803 "386f9b49-f607-4884-a612-3dcdc9b0ca57" 00:05:13.803 ], 00:05:13.803 "product_name": "Malloc disk", 00:05:13.803 "block_size": 512, 00:05:13.803 "num_blocks": 16384, 00:05:13.803 "uuid": "386f9b49-f607-4884-a612-3dcdc9b0ca57", 00:05:13.803 "assigned_rate_limits": { 00:05:13.803 "rw_ios_per_sec": 0, 00:05:13.803 "rw_mbytes_per_sec": 0, 00:05:13.803 "r_mbytes_per_sec": 0, 00:05:13.803 "w_mbytes_per_sec": 0 00:05:13.803 }, 00:05:13.803 "claimed": true, 00:05:13.803 "claim_type": "exclusive_write", 00:05:13.803 "zoned": false, 00:05:13.803 "supported_io_types": { 00:05:13.803 "read": true, 00:05:13.803 "write": true, 00:05:13.803 "unmap": true, 00:05:13.803 "flush": true, 00:05:13.803 "reset": true, 00:05:13.803 "nvme_admin": false, 00:05:13.803 "nvme_io": false, 00:05:13.803 "nvme_io_md": false, 00:05:13.803 "write_zeroes": true, 00:05:13.803 "zcopy": true, 00:05:13.803 "get_zone_info": false, 00:05:13.803 "zone_management": false, 00:05:13.803 "zone_append": false, 00:05:13.803 "compare": false, 00:05:13.803 "compare_and_write": false, 00:05:13.803 "abort": true, 00:05:13.803 "seek_hole": false, 00:05:13.803 "seek_data": false, 00:05:13.803 "copy": true, 00:05:13.803 "nvme_iov_md": false 00:05:13.803 }, 00:05:13.803 "memory_domains": [ 00:05:13.803 { 00:05:13.803 "dma_device_id": "system", 00:05:13.803 "dma_device_type": 1 00:05:13.803 }, 00:05:13.803 { 00:05:13.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.803 "dma_device_type": 2 00:05:13.803 } 00:05:13.803 ], 00:05:13.803 "driver_specific": {} 00:05:13.803 }, 00:05:13.803 { 00:05:13.803 "name": "Passthru0", 00:05:13.803 "aliases": [ 00:05:13.803 "389081aa-6579-5519-90cd-5b0c3b8a5ec8" 00:05:13.803 ], 
00:05:13.803 "product_name": "passthru", 00:05:13.803 "block_size": 512, 00:05:13.803 "num_blocks": 16384, 00:05:13.803 "uuid": "389081aa-6579-5519-90cd-5b0c3b8a5ec8", 00:05:13.803 "assigned_rate_limits": { 00:05:13.803 "rw_ios_per_sec": 0, 00:05:13.803 "rw_mbytes_per_sec": 0, 00:05:13.803 "r_mbytes_per_sec": 0, 00:05:13.803 "w_mbytes_per_sec": 0 00:05:13.803 }, 00:05:13.803 "claimed": false, 00:05:13.803 "zoned": false, 00:05:13.803 "supported_io_types": { 00:05:13.803 "read": true, 00:05:13.803 "write": true, 00:05:13.803 "unmap": true, 00:05:13.803 "flush": true, 00:05:13.803 "reset": true, 00:05:13.803 "nvme_admin": false, 00:05:13.803 "nvme_io": false, 00:05:13.803 "nvme_io_md": false, 00:05:13.803 "write_zeroes": true, 00:05:13.803 "zcopy": true, 00:05:13.803 "get_zone_info": false, 00:05:13.803 "zone_management": false, 00:05:13.803 "zone_append": false, 00:05:13.803 "compare": false, 00:05:13.803 "compare_and_write": false, 00:05:13.803 "abort": true, 00:05:13.803 "seek_hole": false, 00:05:13.803 "seek_data": false, 00:05:13.803 "copy": true, 00:05:13.803 "nvme_iov_md": false 00:05:13.803 }, 00:05:13.803 "memory_domains": [ 00:05:13.803 { 00:05:13.803 "dma_device_id": "system", 00:05:13.803 "dma_device_type": 1 00:05:13.803 }, 00:05:13.803 { 00:05:13.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.803 "dma_device_type": 2 00:05:13.803 } 00:05:13.803 ], 00:05:13.803 "driver_specific": { 00:05:13.803 "passthru": { 00:05:13.803 "name": "Passthru0", 00:05:13.803 "base_bdev_name": "Malloc2" 00:05:13.803 } 00:05:13.803 } 00:05:13.803 } 00:05:13.803 ]' 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.803 10:20:17 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.803 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.804 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.804 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.804 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.804 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.804 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.804 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.804 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.804 10:20:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.804 00:05:13.804 real 0m0.229s 00:05:13.804 user 0m0.153s 00:05:13.804 sys 0m0.024s 00:05:13.804 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.804 10:20:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.804 ************************************ 00:05:13.804 END TEST rpc_daemon_integrity 00:05:13.804 ************************************ 00:05:13.804 10:20:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:13.804 10:20:17 rpc -- rpc/rpc.sh@84 -- # killprocess 2296433 00:05:13.804 10:20:17 rpc -- common/autotest_common.sh@950 -- # '[' -z 2296433 ']' 00:05:13.804 10:20:17 rpc -- common/autotest_common.sh@954 -- # kill -0 2296433 00:05:13.804 10:20:17 rpc -- common/autotest_common.sh@955 -- # uname 
00:05:13.804 10:20:17 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.804 10:20:17 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2296433 00:05:13.804 10:20:17 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.804 10:20:17 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.804 10:20:17 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2296433' 00:05:13.804 killing process with pid 2296433 00:05:13.804 10:20:17 rpc -- common/autotest_common.sh@969 -- # kill 2296433 00:05:13.804 10:20:17 rpc -- common/autotest_common.sh@974 -- # wait 2296433 00:05:14.370 00:05:14.370 real 0m2.506s 00:05:14.370 user 0m3.127s 00:05:14.370 sys 0m0.696s 00:05:14.370 10:20:17 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.370 10:20:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.370 ************************************ 00:05:14.370 END TEST rpc 00:05:14.370 ************************************ 00:05:14.370 10:20:17 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:14.370 10:20:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.370 10:20:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.370 10:20:17 -- common/autotest_common.sh@10 -- # set +x 00:05:14.370 ************************************ 00:05:14.370 START TEST skip_rpc 00:05:14.370 ************************************ 00:05:14.370 10:20:17 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:14.370 * Looking for test storage... 
00:05:14.370 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:14.370 10:20:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:14.370 10:20:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:14.370 10:20:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:14.370 10:20:18 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.370 10:20:18 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.370 10:20:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.370 ************************************ 00:05:14.370 START TEST skip_rpc 00:05:14.370 ************************************ 00:05:14.370 10:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:14.370 10:20:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2297001 00:05:14.370 10:20:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:14.370 10:20:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.370 10:20:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:14.629 [2024-07-25 10:20:18.126974] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:05:14.629 [2024-07-25 10:20:18.127042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297001 ] 00:05:14.629 [2024-07-25 10:20:18.205590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.629 [2024-07-25 10:20:18.326404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.892 10:20:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:19.892 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:19.892 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:19.892 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:19.892 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.892 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2297001 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2297001 ']' 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2297001 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2297001 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2297001' 00:05:19.893 killing process with pid 2297001 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2297001 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2297001 00:05:19.893 00:05:19.893 real 0m5.494s 00:05:19.893 user 0m5.143s 00:05:19.893 sys 0m0.346s 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.893 10:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.893 ************************************ 00:05:19.893 END TEST skip_rpc 00:05:19.893 ************************************ 00:05:19.893 10:20:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:19.893 10:20:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.893 10:20:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.893 10:20:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.150 ************************************ 00:05:20.150 
START TEST skip_rpc_with_json 00:05:20.150 ************************************ 00:05:20.150 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:20.150 10:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:20.150 10:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2297686 00:05:20.150 10:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.150 10:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.150 10:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2297686 00:05:20.150 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2297686 ']' 00:05:20.150 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.150 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.150 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.150 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.150 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.150 [2024-07-25 10:20:23.669787] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:05:20.150 [2024-07-25 10:20:23.669862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297686 ] 00:05:20.150 [2024-07-25 10:20:23.745266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.150 [2024-07-25 10:20:23.858687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.084 [2024-07-25 10:20:24.608610] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:21.084 request: 00:05:21.084 { 00:05:21.084 "trtype": "tcp", 00:05:21.084 "method": "nvmf_get_transports", 00:05:21.084 "req_id": 1 00:05:21.084 } 00:05:21.084 Got JSON-RPC error response 00:05:21.084 response: 00:05:21.084 { 00:05:21.084 "code": -19, 00:05:21.084 "message": "No such device" 00:05:21.084 } 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.084 [2024-07-25 10:20:24.616729] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.084 10:20:24 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.084 10:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:21.084 { 00:05:21.084 "subsystems": [ 00:05:21.084 { 00:05:21.084 "subsystem": "keyring", 00:05:21.084 "config": [] 00:05:21.084 }, 00:05:21.084 { 00:05:21.084 "subsystem": "iobuf", 00:05:21.084 "config": [ 00:05:21.084 { 00:05:21.084 "method": "iobuf_set_options", 00:05:21.084 "params": { 00:05:21.084 "small_pool_count": 8192, 00:05:21.084 "large_pool_count": 1024, 00:05:21.084 "small_bufsize": 8192, 00:05:21.084 "large_bufsize": 135168 00:05:21.084 } 00:05:21.084 } 00:05:21.084 ] 00:05:21.084 }, 00:05:21.084 { 00:05:21.084 "subsystem": "sock", 00:05:21.084 "config": [ 00:05:21.084 { 00:05:21.084 "method": "sock_set_default_impl", 00:05:21.084 "params": { 00:05:21.084 "impl_name": "posix" 00:05:21.084 } 00:05:21.084 }, 00:05:21.084 { 00:05:21.084 "method": "sock_impl_set_options", 00:05:21.084 "params": { 00:05:21.084 "impl_name": "ssl", 00:05:21.084 "recv_buf_size": 4096, 00:05:21.084 "send_buf_size": 4096, 00:05:21.085 "enable_recv_pipe": true, 00:05:21.085 "enable_quickack": false, 00:05:21.085 "enable_placement_id": 0, 00:05:21.085 "enable_zerocopy_send_server": true, 00:05:21.085 "enable_zerocopy_send_client": false, 00:05:21.085 "zerocopy_threshold": 0, 00:05:21.085 "tls_version": 0, 00:05:21.085 "enable_ktls": false 00:05:21.085 } 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "method": "sock_impl_set_options", 00:05:21.085 "params": { 
00:05:21.085 "impl_name": "posix", 00:05:21.085 "recv_buf_size": 2097152, 00:05:21.085 "send_buf_size": 2097152, 00:05:21.085 "enable_recv_pipe": true, 00:05:21.085 "enable_quickack": false, 00:05:21.085 "enable_placement_id": 0, 00:05:21.085 "enable_zerocopy_send_server": true, 00:05:21.085 "enable_zerocopy_send_client": false, 00:05:21.085 "zerocopy_threshold": 0, 00:05:21.085 "tls_version": 0, 00:05:21.085 "enable_ktls": false 00:05:21.085 } 00:05:21.085 } 00:05:21.085 ] 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "subsystem": "vmd", 00:05:21.085 "config": [] 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "subsystem": "accel", 00:05:21.085 "config": [ 00:05:21.085 { 00:05:21.085 "method": "accel_set_options", 00:05:21.085 "params": { 00:05:21.085 "small_cache_size": 128, 00:05:21.085 "large_cache_size": 16, 00:05:21.085 "task_count": 2048, 00:05:21.085 "sequence_count": 2048, 00:05:21.085 "buf_count": 2048 00:05:21.085 } 00:05:21.085 } 00:05:21.085 ] 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "subsystem": "bdev", 00:05:21.085 "config": [ 00:05:21.085 { 00:05:21.085 "method": "bdev_set_options", 00:05:21.085 "params": { 00:05:21.085 "bdev_io_pool_size": 65535, 00:05:21.085 "bdev_io_cache_size": 256, 00:05:21.085 "bdev_auto_examine": true, 00:05:21.085 "iobuf_small_cache_size": 128, 00:05:21.085 "iobuf_large_cache_size": 16 00:05:21.085 } 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "method": "bdev_raid_set_options", 00:05:21.085 "params": { 00:05:21.085 "process_window_size_kb": 1024, 00:05:21.085 "process_max_bandwidth_mb_sec": 0 00:05:21.085 } 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "method": "bdev_iscsi_set_options", 00:05:21.085 "params": { 00:05:21.085 "timeout_sec": 30 00:05:21.085 } 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "method": "bdev_nvme_set_options", 00:05:21.085 "params": { 00:05:21.085 "action_on_timeout": "none", 00:05:21.085 "timeout_us": 0, 00:05:21.085 "timeout_admin_us": 0, 00:05:21.085 "keep_alive_timeout_ms": 10000, 00:05:21.085 
"arbitration_burst": 0, 00:05:21.085 "low_priority_weight": 0, 00:05:21.085 "medium_priority_weight": 0, 00:05:21.085 "high_priority_weight": 0, 00:05:21.085 "nvme_adminq_poll_period_us": 10000, 00:05:21.085 "nvme_ioq_poll_period_us": 0, 00:05:21.085 "io_queue_requests": 0, 00:05:21.085 "delay_cmd_submit": true, 00:05:21.085 "transport_retry_count": 4, 00:05:21.085 "bdev_retry_count": 3, 00:05:21.085 "transport_ack_timeout": 0, 00:05:21.085 "ctrlr_loss_timeout_sec": 0, 00:05:21.085 "reconnect_delay_sec": 0, 00:05:21.085 "fast_io_fail_timeout_sec": 0, 00:05:21.085 "disable_auto_failback": false, 00:05:21.085 "generate_uuids": false, 00:05:21.085 "transport_tos": 0, 00:05:21.085 "nvme_error_stat": false, 00:05:21.085 "rdma_srq_size": 0, 00:05:21.085 "io_path_stat": false, 00:05:21.085 "allow_accel_sequence": false, 00:05:21.085 "rdma_max_cq_size": 0, 00:05:21.085 "rdma_cm_event_timeout_ms": 0, 00:05:21.085 "dhchap_digests": [ 00:05:21.085 "sha256", 00:05:21.085 "sha384", 00:05:21.085 "sha512" 00:05:21.085 ], 00:05:21.085 "dhchap_dhgroups": [ 00:05:21.085 "null", 00:05:21.085 "ffdhe2048", 00:05:21.085 "ffdhe3072", 00:05:21.085 "ffdhe4096", 00:05:21.085 "ffdhe6144", 00:05:21.085 "ffdhe8192" 00:05:21.085 ] 00:05:21.085 } 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "method": "bdev_nvme_set_hotplug", 00:05:21.085 "params": { 00:05:21.085 "period_us": 100000, 00:05:21.085 "enable": false 00:05:21.085 } 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "method": "bdev_wait_for_examine" 00:05:21.085 } 00:05:21.085 ] 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "subsystem": "scsi", 00:05:21.085 "config": null 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "subsystem": "scheduler", 00:05:21.085 "config": [ 00:05:21.085 { 00:05:21.085 "method": "framework_set_scheduler", 00:05:21.085 "params": { 00:05:21.085 "name": "static" 00:05:21.085 } 00:05:21.085 } 00:05:21.085 ] 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "subsystem": "vhost_scsi", 00:05:21.085 "config": [] 00:05:21.085 }, 
00:05:21.085 { 00:05:21.085 "subsystem": "vhost_blk", 00:05:21.085 "config": [] 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "subsystem": "ublk", 00:05:21.085 "config": [] 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "subsystem": "nbd", 00:05:21.085 "config": [] 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "subsystem": "nvmf", 00:05:21.085 "config": [ 00:05:21.085 { 00:05:21.085 "method": "nvmf_set_config", 00:05:21.085 "params": { 00:05:21.085 "discovery_filter": "match_any", 00:05:21.085 "admin_cmd_passthru": { 00:05:21.085 "identify_ctrlr": false 00:05:21.085 } 00:05:21.085 } 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "method": "nvmf_set_max_subsystems", 00:05:21.085 "params": { 00:05:21.085 "max_subsystems": 1024 00:05:21.085 } 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "method": "nvmf_set_crdt", 00:05:21.085 "params": { 00:05:21.085 "crdt1": 0, 00:05:21.085 "crdt2": 0, 00:05:21.085 "crdt3": 0 00:05:21.085 } 00:05:21.085 }, 00:05:21.085 { 00:05:21.085 "method": "nvmf_create_transport", 00:05:21.085 "params": { 00:05:21.085 "trtype": "TCP", 00:05:21.085 "max_queue_depth": 128, 00:05:21.085 "max_io_qpairs_per_ctrlr": 127, 00:05:21.085 "in_capsule_data_size": 4096, 00:05:21.085 "max_io_size": 131072, 00:05:21.085 "io_unit_size": 131072, 00:05:21.086 "max_aq_depth": 128, 00:05:21.086 "num_shared_buffers": 511, 00:05:21.086 "buf_cache_size": 4294967295, 00:05:21.086 "dif_insert_or_strip": false, 00:05:21.086 "zcopy": false, 00:05:21.086 "c2h_success": true, 00:05:21.086 "sock_priority": 0, 00:05:21.086 "abort_timeout_sec": 1, 00:05:21.086 "ack_timeout": 0, 00:05:21.086 "data_wr_pool_size": 0 00:05:21.086 } 00:05:21.086 } 00:05:21.086 ] 00:05:21.086 }, 00:05:21.086 { 00:05:21.086 "subsystem": "iscsi", 00:05:21.086 "config": [ 00:05:21.086 { 00:05:21.086 "method": "iscsi_set_options", 00:05:21.086 "params": { 00:05:21.086 "node_base": "iqn.2016-06.io.spdk", 00:05:21.086 "max_sessions": 128, 00:05:21.086 "max_connections_per_session": 2, 00:05:21.086 "max_queue_depth": 
64, 00:05:21.086 "default_time2wait": 2, 00:05:21.086 "default_time2retain": 20, 00:05:21.086 "first_burst_length": 8192, 00:05:21.086 "immediate_data": true, 00:05:21.086 "allow_duplicated_isid": false, 00:05:21.086 "error_recovery_level": 0, 00:05:21.086 "nop_timeout": 60, 00:05:21.086 "nop_in_interval": 30, 00:05:21.086 "disable_chap": false, 00:05:21.086 "require_chap": false, 00:05:21.086 "mutual_chap": false, 00:05:21.086 "chap_group": 0, 00:05:21.086 "max_large_datain_per_connection": 64, 00:05:21.086 "max_r2t_per_connection": 4, 00:05:21.086 "pdu_pool_size": 36864, 00:05:21.086 "immediate_data_pool_size": 16384, 00:05:21.086 "data_out_pool_size": 2048 00:05:21.086 } 00:05:21.086 } 00:05:21.086 ] 00:05:21.086 } 00:05:21.086 ] 00:05:21.086 } 00:05:21.086 10:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:21.086 10:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2297686 00:05:21.086 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2297686 ']' 00:05:21.086 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2297686 00:05:21.086 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:21.086 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.086 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2297686 00:05:21.344 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:21.344 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:21.344 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2297686' 00:05:21.344 killing process with pid 2297686 00:05:21.344 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2297686 
00:05:21.344 10:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2297686 00:05:21.602 10:20:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2297836 00:05:21.602 10:20:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:21.602 10:20:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:26.862 10:20:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2297836 00:05:26.862 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2297836 ']' 00:05:26.862 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2297836 00:05:26.862 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:26.862 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.862 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2297836 00:05:26.862 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.862 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.862 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2297836' 00:05:26.862 killing process with pid 2297836 00:05:26.862 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2297836 00:05:26.862 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2297836 00:05:27.120 10:20:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:27.120 10:20:30 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:27.120 00:05:27.120 real 0m7.171s 00:05:27.120 user 0m6.897s 00:05:27.120 sys 0m0.769s 00:05:27.120 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.120 10:20:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.120 ************************************ 00:05:27.120 END TEST skip_rpc_with_json 00:05:27.120 ************************************ 00:05:27.120 10:20:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:27.120 10:20:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.120 10:20:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.120 10:20:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.377 ************************************ 00:05:27.377 START TEST skip_rpc_with_delay 00:05:27.377 ************************************ 00:05:27.377 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:27.377 10:20:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.377 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:27.377 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.377 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.377 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.377 10:20:30 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.377 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.378 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.378 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.378 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.378 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:27.378 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.378 [2024-07-25 10:20:30.895604] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:27.378 [2024-07-25 10:20:30.895722] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:27.378 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:27.378 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:27.378 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:27.378 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:27.378 00:05:27.378 real 0m0.079s 00:05:27.378 user 0m0.046s 00:05:27.378 sys 0m0.032s 00:05:27.378 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.378 10:20:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:27.378 ************************************ 00:05:27.378 END TEST skip_rpc_with_delay 00:05:27.378 ************************************ 00:05:27.378 10:20:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:27.378 10:20:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:27.378 10:20:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:27.378 10:20:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.378 10:20:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.378 10:20:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.378 ************************************ 00:05:27.378 START TEST exit_on_failed_rpc_init 00:05:27.378 ************************************ 00:05:27.378 10:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:27.378 10:20:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2298549 00:05:27.378 10:20:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.378 10:20:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2298549 00:05:27.378 10:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2298549 ']' 00:05:27.378 10:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.378 10:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.378 10:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.378 10:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.378 10:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.378 [2024-07-25 10:20:31.024532] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:05:27.378 [2024-07-25 10:20:31.024625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2298549 ] 00:05:27.635 [2024-07-25 10:20:31.103238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.635 [2024-07-25 10:20:31.223271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.568 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.568 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:28.568 10:20:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.568 10:20:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.568 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:28.568 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.568 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.568 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.568 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.568 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.569 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.569 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.569 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.569 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:28.569 10:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.569 [2024-07-25 10:20:32.008831] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:05:28.569 [2024-07-25 10:20:32.008924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2298688 ] 00:05:28.569 [2024-07-25 10:20:32.089892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.569 [2024-07-25 10:20:32.212903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.569 [2024-07-25 10:20:32.213022] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:28.569 [2024-07-25 10:20:32.213044] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:28.569 [2024-07-25 10:20:32.213057] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2298549 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2298549 ']' 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2298549 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2298549 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2298549' 
00:05:28.866 killing process with pid 2298549 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2298549 00:05:28.866 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2298549 00:05:29.433 00:05:29.433 real 0m1.881s 00:05:29.433 user 0m2.215s 00:05:29.433 sys 0m0.541s 00:05:29.433 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.433 10:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.433 ************************************ 00:05:29.433 END TEST exit_on_failed_rpc_init 00:05:29.433 ************************************ 00:05:29.433 10:20:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:29.433 00:05:29.433 real 0m14.876s 00:05:29.433 user 0m14.399s 00:05:29.433 sys 0m1.858s 00:05:29.433 10:20:32 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.433 10:20:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.433 ************************************ 00:05:29.433 END TEST skip_rpc 00:05:29.433 ************************************ 00:05:29.433 10:20:32 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:29.433 10:20:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.433 10:20:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.433 10:20:32 -- common/autotest_common.sh@10 -- # set +x 00:05:29.433 ************************************ 00:05:29.433 START TEST rpc_client 00:05:29.433 ************************************ 00:05:29.433 10:20:32 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:29.433 * Looking for test storage... 
00:05:29.433 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:05:29.433 10:20:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:29.433 OK 00:05:29.433 10:20:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:29.433 00:05:29.433 real 0m0.072s 00:05:29.433 user 0m0.020s 00:05:29.433 sys 0m0.057s 00:05:29.433 10:20:32 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.433 10:20:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:29.433 ************************************ 00:05:29.433 END TEST rpc_client 00:05:29.433 ************************************ 00:05:29.433 10:20:33 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:29.433 10:20:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.433 10:20:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.433 10:20:33 -- common/autotest_common.sh@10 -- # set +x 00:05:29.433 ************************************ 00:05:29.433 START TEST json_config 00:05:29.433 ************************************ 00:05:29.433 10:20:33 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:29.433 10:20:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.433 10:20:33 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:05:29.433 10:20:33 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.433 10:20:33 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.433 10:20:33 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.434 10:20:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.434 10:20:33 
json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.434 10:20:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.434 10:20:33 json_config -- paths/export.sh@5 -- # export PATH 00:05:29.434 10:20:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.434 10:20:33 json_config -- nvmf/common.sh@47 -- # : 0 00:05:29.434 10:20:33 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:29.434 10:20:33 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:29.434 10:20:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.434 10:20:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.434 10:20:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.434 10:20:33 json_config -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:05:29.434 10:20:33 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:29.434 10:20:33 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@359 -- # 
trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:29.434 INFO: JSON configuration test init 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:29.434 10:20:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.434 10:20:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:29.434 10:20:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.434 10:20:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.434 10:20:33 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:29.434 10:20:33 json_config -- json_config/common.sh@9 -- # local app=target 00:05:29.434 10:20:33 json_config -- json_config/common.sh@10 -- # shift 00:05:29.434 10:20:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:29.434 10:20:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:29.434 10:20:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:29.434 10:20:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.434 10:20:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.434 10:20:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2298934 00:05:29.434 10:20:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:29.434 10:20:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:29.434 Waiting for target to run... 
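The harness above launches `spdk_tgt` with `--wait-for-rpc` and then blocks in `waitforlisten` (with `max_retries=100`) until the target's UNIX-domain socket at `/var/tmp/spdk_tgt.sock` accepts RPCs. A minimal sketch of that polling pattern — the function name and timing here are illustrative, and the real `waitforlisten` in `autotest_common.sh` additionally verifies that the target pid is still alive:

```shell
#!/usr/bin/env bash
# Illustrative stand-in for autotest_common.sh's waitforlisten: poll for the
# target's UNIX-domain socket, giving up after max_retries attempts.
# (The real helper also checks that the spdk_tgt process has not died.)
wait_for_rpc_socket() {
    local sock=$1 max_retries=${2:-100}
    local i=0
    while (( i++ < max_retries )); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# Hypothetical usage: start the target, then wait before issuing any RPC.
#   spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
#   wait_for_rpc_socket /var/tmp/spdk_tgt.sock
```

Only after this wait returns does the test echo the empty acknowledgement and move on to configuring the accel framework over the socket.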
00:05:29.434 10:20:33 json_config -- json_config/common.sh@25 -- # waitforlisten 2298934 /var/tmp/spdk_tgt.sock 00:05:29.434 10:20:33 json_config -- common/autotest_common.sh@831 -- # '[' -z 2298934 ']' 00:05:29.434 10:20:33 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.434 10:20:33 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.434 10:20:33 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:29.434 10:20:33 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.434 10:20:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.884 [2024-07-25 10:20:33.146059] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:05:29.884 [2024-07-25 10:20:33.146156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2298934 ] 00:05:29.884 [2024-07-25 10:20:33.512958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.141 [2024-07-25 10:20:33.604708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.399 10:20:34 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.399 10:20:34 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:30.399 10:20:34 json_config -- json_config/common.sh@26 -- # echo '' 00:05:30.399 00:05:30.399 10:20:34 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:30.399 10:20:34 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:30.399 10:20:34 json_config -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:30.399 10:20:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.399 10:20:34 json_config -- json_config/json_config.sh@99 -- # [[ 1 -eq 1 ]] 00:05:30.399 10:20:34 json_config -- json_config/json_config.sh@100 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:05:30.399 10:20:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:05:30.656 10:20:34 json_config -- json_config/json_config.sh@101 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:30.656 10:20:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:30.913 [2024-07-25 10:20:34.591794] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:05:30.913 10:20:34 json_config -- json_config/json_config.sh@102 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:30.913 10:20:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:31.171 [2024-07-25 10:20:34.876520] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:05:31.428 10:20:34 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:31.428 10:20:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:31.428 10:20:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.428 10:20:34 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:31.428 10:20:34 json_config -- json_config/json_config.sh@278 -- # tgt_rpc 
load_config 00:05:31.428 10:20:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:31.685 [2024-07-25 10:20:35.169499] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:05:36.948 10:20:40 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:36.948 10:20:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:36.948 10:20:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:36.948 10:20:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.948 10:20:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:36.948 10:20:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:36.948 10:20:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:36.948 10:20:40 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:36.948 10:20:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:36.948 10:20:40 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:36.948 10:20:40 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:36.948 10:20:40 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:36.948 10:20:40 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:36.948 10:20:40 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:36.948 10:20:40 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:36.948 10:20:40 json_config -- json_config/json_config.sh@51 -- # sort 00:05:36.948 10:20:40 json_config -- 
json_config/json_config.sh@51 -- # uniq -u 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:36.949 10:20:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.949 10:20:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:05:36.949 10:20:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:36.949 10:20:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:05:36.949 10:20:40 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:05:36.949 10:20:40 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:05:37.207 10:20:40 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:05:37.207 10:20:40 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:37.207 10:20:40 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:37.207 10:20:40 json_config -- json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:05:37.207 10:20:40 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:05:37.207 10:20:40 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:05:37.207 10:20:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:05:37.465 Nvme0n1p0 Nvme0n1p1 00:05:37.465 10:20:41 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:05:37.465 10:20:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:05:37.723 [2024-07-25 10:20:41.350966] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:37.723 [2024-07-25 10:20:41.351030] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:37.723 00:05:37.723 10:20:41 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:05:37.723 10:20:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:05:37.981 Malloc3 00:05:37.981 10:20:41 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:37.981 
10:20:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:38.239 [2024-07-25 10:20:41.852386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:38.239 [2024-07-25 10:20:41.852457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.239 [2024-07-25 10:20:41.852485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x877ce0 00:05:38.239 [2024-07-25 10:20:41.852499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.239 [2024-07-25 10:20:41.854059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.239 [2024-07-25 10:20:41.854089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:05:38.239 PTBdevFromMalloc3 00:05:38.239 10:20:41 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:05:38.239 10:20:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:05:38.498 Null0 00:05:38.498 10:20:42 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:05:38.498 10:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:05:38.756 Malloc0 00:05:38.756 10:20:42 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:05:38.756 10:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:05:39.015 Malloc1 00:05:39.015 10:20:42 json_config -- 
json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:05:39.015 10:20:42 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:05:39.274 102400+0 records in 00:05:39.274 102400+0 records out 00:05:39.274 104857600 bytes (105 MB, 100 MiB) copied, 0.18805 s, 558 MB/s 00:05:39.274 10:20:42 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:05:39.274 10:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:05:39.532 aio_disk 00:05:39.532 10:20:43 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:05:39.532 10:20:43 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:05:39.532 10:20:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:05:42.071 3816983e-4fbe-472a-936e-84dd93e6674c 00:05:42.071 10:20:45 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:05:42.071 10:20:45 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:05:42.071 10:20:45 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:05:42.329 10:20:45 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:05:42.329 10:20:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:05:42.588 10:20:46 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:05:42.588 10:20:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:05:42.588 10:20:46 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:05:42.588 10:20:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:05:42.848 10:20:46 json_config -- json_config/json_config.sh@161 -- # [[ 1 -eq 1 ]] 00:05:42.848 10:20:46 json_config -- json_config/json_config.sh@162 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:05:42.848 10:20:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:05:43.107 MallocForCryptoBdev 00:05:43.107 10:20:46 json_config -- json_config/json_config.sh@163 -- # lspci -d:37c8 00:05:43.107 10:20:46 json_config -- json_config/json_config.sh@163 -- # wc -l 00:05:43.107 10:20:46 json_config -- json_config/json_config.sh@163 -- # [[ 3 -eq 0 ]] 00:05:43.107 10:20:46 json_config -- json_config/json_config.sh@166 -- # local crypto_driver=crypto_qat 00:05:43.365 10:20:46 json_config -- 
json_config/json_config.sh@169 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:05:43.365 10:20:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:05:43.365 [2024-07-25 10:20:47.046715] vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:05:43.365 CryptoMallocBdev 00:05:43.365 10:20:47 json_config -- json_config/json_config.sh@173 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:05:43.365 10:20:47 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:05:43.365 10:20:47 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:b0f401c7-8304-4dac-a96a-a86913f34d98 bdev_register:5a2a610b-c0c4-400f-b084-aa9f1761259d bdev_register:ed8f90dc-0506-43aa-b568-9b64084ca8df bdev_register:26cee8e8-b044-4f51-b875-3bb10c44b952 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:05:43.365 10:20:47 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:05:43.365 10:20:47 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:05:43.365 10:20:47 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:05:43.366 10:20:47 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 
bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:b0f401c7-8304-4dac-a96a-a86913f34d98 bdev_register:5a2a610b-c0c4-400f-b084-aa9f1761259d bdev_register:ed8f90dc-0506-43aa-b568-9b64084ca8df bdev_register:26cee8e8-b044-4f51-b875-3bb10c44b952 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:05:43.366 10:20:47 json_config -- json_config/json_config.sh@75 -- # sort 00:05:43.366 10:20:47 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:05:43.366 10:20:47 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:05:43.366 10:20:47 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:05:43.366 10:20:47 json_config -- json_config/json_config.sh@76 -- # sort 00:05:43.366 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.366 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.366 10:20:47 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:05:43.366 10:20:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:05:43.366 10:20:47 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.624 10:20:47 
json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.624 10:20:47 json_config -- 
json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.624 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:05:43.625 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.625 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.625 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:05:43.625 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.625 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.625 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:b0f401c7-8304-4dac-a96a-a86913f34d98 00:05:43.625 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.625 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.625 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:5a2a610b-c0c4-400f-b084-aa9f1761259d 00:05:43.625 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.625 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:ed8f90dc-0506-43aa-b568-9b64084ca8df 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:26cee8e8-b044-4f51-b875-3bb10c44b952 00:05:43.883 10:20:47 
json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:MallocForCryptoBdev 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:CryptoMallocBdev 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:26cee8e8-b044-4f51-b875-3bb10c44b952 bdev_register:5a2a610b-c0c4-400f-b084-aa9f1761259d bdev_register:aio_disk bdev_register:b0f401c7-8304-4dac-a96a-a86913f34d98 bdev_register:CryptoMallocBdev bdev_register:ed8f90dc-0506-43aa-b568-9b64084ca8df bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\6\c\e\e\8\e\8\-\b\0\4\4\-\4\f\5\1\-\b\8\7\5\-\3\b\b\1\0\c\4\4\b\9\5\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\a\2\a\6\1\0\b\-\c\0\c\4\-\4\0\0\f\-\b\0\8\4\-\a\a\9\f\1\7\6\1\2\5\9\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\0\f\4\0\1\c\7\-\8\3\0\4\-\4\d\a\c\-\a\9\6\a\-\a\8\6\9\1\3\f\3\4\d\9\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\d\8\f\9\0\d\c\-\0\5\0\6\-\4\3\a\a\-\b\5\6\8\-\9\b\6\4\0\8\4\c\a\8\d\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@90 -- # cat 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:26cee8e8-b044-4f51-b875-3bb10c44b952 bdev_register:5a2a610b-c0c4-400f-b084-aa9f1761259d bdev_register:aio_disk bdev_register:b0f401c7-8304-4dac-a96a-a86913f34d98 bdev_register:CryptoMallocBdev bdev_register:ed8f90dc-0506-43aa-b568-9b64084ca8df bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:05:43.883 Expected events matched: 00:05:43.883 bdev_register:26cee8e8-b044-4f51-b875-3bb10c44b952 00:05:43.883 bdev_register:5a2a610b-c0c4-400f-b084-aa9f1761259d 00:05:43.883 bdev_register:aio_disk 00:05:43.883 bdev_register:b0f401c7-8304-4dac-a96a-a86913f34d98 00:05:43.883 bdev_register:CryptoMallocBdev 00:05:43.883 bdev_register:ed8f90dc-0506-43aa-b568-9b64084ca8df 00:05:43.883 bdev_register:Malloc0 00:05:43.883 bdev_register:Malloc0p0 00:05:43.883 bdev_register:Malloc0p1 00:05:43.883 bdev_register:Malloc0p2 00:05:43.883 bdev_register:Malloc1 00:05:43.883 bdev_register:Malloc3 00:05:43.883 bdev_register:MallocForCryptoBdev 00:05:43.883 bdev_register:Null0 00:05:43.883 bdev_register:Nvme0n1 00:05:43.883 bdev_register:Nvme0n1p0 00:05:43.883 bdev_register:Nvme0n1p1 
00:05:43.883 bdev_register:PTBdevFromMalloc3 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:05:43.883 10:20:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.883 10:20:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:43.883 10:20:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.883 10:20:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:43.883 10:20:47 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.884 10:20:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:44.142 MallocBdevForConfigChangeCheck 00:05:44.142 10:20:47 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:44.142 10:20:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:44.142 10:20:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.142 10:20:47 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:44.142 10:20:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.400 10:20:48 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down 
applications...' 00:05:44.400 INFO: shutting down applications... 00:05:44.400 10:20:48 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:44.400 10:20:48 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:44.400 10:20:48 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:44.400 10:20:48 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:44.658 [2024-07-25 10:20:48.210350] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:05:46.032 Calling clear_iscsi_subsystem 00:05:46.032 Calling clear_nvmf_subsystem 00:05:46.032 Calling clear_nbd_subsystem 00:05:46.032 Calling clear_ublk_subsystem 00:05:46.032 Calling clear_vhost_blk_subsystem 00:05:46.032 Calling clear_vhost_scsi_subsystem 00:05:46.032 Calling clear_bdev_subsystem 00:05:46.032 10:20:49 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:05:46.032 10:20:49 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:46.032 10:20:49 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:46.032 10:20:49 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:46.032 10:20:49 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:46.032 10:20:49 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:46.598 10:20:50 json_config -- json_config/json_config.sh@349 -- # break 00:05:46.598 10:20:50 json_config -- 
json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:46.598 10:20:50 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:46.598 10:20:50 json_config -- json_config/common.sh@31 -- # local app=target 00:05:46.598 10:20:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:46.598 10:20:50 json_config -- json_config/common.sh@35 -- # [[ -n 2298934 ]] 00:05:46.598 10:20:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2298934 00:05:46.598 10:20:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:46.598 10:20:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.598 10:20:50 json_config -- json_config/common.sh@41 -- # kill -0 2298934 00:05:46.598 10:20:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.165 10:20:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.165 10:20:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.165 10:20:50 json_config -- json_config/common.sh@41 -- # kill -0 2298934 00:05:47.165 10:20:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.425 10:20:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.425 10:20:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.425 10:20:51 json_config -- json_config/common.sh@41 -- # kill -0 2298934 00:05:47.425 10:20:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:47.425 10:20:51 json_config -- json_config/common.sh@43 -- # break 00:05:47.425 10:20:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:47.425 10:20:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:47.425 SPDK target shutdown done 00:05:47.425 10:20:51 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:47.425 INFO: relaunching applications... 
00:05:47.425 10:20:51 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.425 10:20:51 json_config -- json_config/common.sh@9 -- # local app=target 00:05:47.425 10:20:51 json_config -- json_config/common.sh@10 -- # shift 00:05:47.425 10:20:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:47.425 10:20:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:47.425 10:20:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:47.425 10:20:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.425 10:20:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.425 10:20:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2301228 00:05:47.425 10:20:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.425 10:20:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:47.425 Waiting for target to run... 00:05:47.425 10:20:51 json_config -- json_config/common.sh@25 -- # waitforlisten 2301228 /var/tmp/spdk_tgt.sock 00:05:47.425 10:20:51 json_config -- common/autotest_common.sh@831 -- # '[' -z 2301228 ']' 00:05:47.425 10:20:51 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:47.425 10:20:51 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.425 10:20:51 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:47.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:47.425 10:20:51 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.425 10:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.685 [2024-07-25 10:20:51.158963] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:05:47.685 [2024-07-25 10:20:51.159065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2301228 ] 00:05:48.254 [2024-07-25 10:20:51.753735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.254 [2024-07-25 10:20:51.857855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.254 [2024-07-25 10:20:51.911990] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:05:48.254 [2024-07-25 10:20:51.920024] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:05:48.254 [2024-07-25 10:20:51.928041] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:05:48.513 [2024-07-25 10:20:52.008865] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:05:51.085 [2024-07-25 10:20:54.268075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:51.085 [2024-07-25 10:20:54.268172] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:51.085 [2024-07-25 10:20:54.268190] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:51.085 [2024-07-25 10:20:54.276115] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:05:51.085 [2024-07-25 10:20:54.276153] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: Nvme0n1 00:05:51.085 [2024-07-25 10:20:54.284112] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:51.085 [2024-07-25 10:20:54.284145] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:51.085 [2024-07-25 10:20:54.292145] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:05:51.085 [2024-07-25 10:20:54.292195] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:05:51.085 [2024-07-25 10:20:54.292218] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:53.617 [2024-07-25 10:20:57.172196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:53.617 [2024-07-25 10:20:57.172242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:53.617 [2024-07-25 10:20:57.172272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x106e840 00:05:53.617 [2024-07-25 10:20:57.172287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:53.617 [2024-07-25 10:20:57.172599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:53.617 [2024-07-25 10:20:57.172626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:05:53.876 10:20:57 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.876 10:20:57 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:53.876 10:20:57 json_config -- json_config/common.sh@26 -- # echo '' 00:05:53.876 00:05:53.876 10:20:57 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:53.876 10:20:57 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:53.876 INFO: Checking if target configuration is the same... 
00:05:53.876 10:20:57 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:53.876 10:20:57 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:53.876 10:20:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.876 + '[' 2 -ne 2 ']' 00:05:53.876 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:53.876 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:05:53.876 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:05:53.876 +++ basename /dev/fd/62 00:05:53.876 ++ mktemp /tmp/62.XXX 00:05:53.876 + tmp_file_1=/tmp/62.TRV 00:05:53.876 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:53.876 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:53.876 + tmp_file_2=/tmp/spdk_tgt_config.json.b2u 00:05:53.876 + ret=0 00:05:53.876 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:54.135 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:54.135 + diff -u /tmp/62.TRV /tmp/spdk_tgt_config.json.b2u 00:05:54.135 + echo 'INFO: JSON config files are the same' 00:05:54.135 INFO: JSON config files are the same 00:05:54.135 + rm /tmp/62.TRV /tmp/spdk_tgt_config.json.b2u 00:05:54.135 + exit 0 00:05:54.135 10:20:57 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:54.135 10:20:57 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:54.135 INFO: changing configuration and checking if this can be detected... 
00:05:54.135 10:20:57 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:54.135 10:20:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:54.393 10:20:58 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.393 10:20:58 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:54.393 10:20:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.393 + '[' 2 -ne 2 ']' 00:05:54.393 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:54.393 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:05:54.393 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:05:54.393 +++ basename /dev/fd/62 00:05:54.393 ++ mktemp /tmp/62.XXX 00:05:54.393 + tmp_file_1=/tmp/62.HNv 00:05:54.393 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.393 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:54.393 + tmp_file_2=/tmp/spdk_tgt_config.json.MDc 00:05:54.393 + ret=0 00:05:54.393 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:54.959 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:54.959 + diff -u /tmp/62.HNv /tmp/spdk_tgt_config.json.MDc 00:05:54.959 + ret=1 00:05:54.959 + echo '=== Start of file: /tmp/62.HNv ===' 00:05:54.959 + cat /tmp/62.HNv 00:05:54.959 + echo '=== End of file: /tmp/62.HNv ===' 00:05:54.959 + echo '' 00:05:54.959 + echo '=== Start of file: /tmp/spdk_tgt_config.json.MDc ===' 00:05:54.959 + cat /tmp/spdk_tgt_config.json.MDc 00:05:54.959 + echo '=== End of file: /tmp/spdk_tgt_config.json.MDc ===' 00:05:54.959 + echo '' 00:05:54.959 + rm /tmp/62.HNv /tmp/spdk_tgt_config.json.MDc 00:05:54.959 + exit 1 00:05:54.959 10:20:58 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:54.959 INFO: configuration change detected. 
00:05:54.959 10:20:58 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:54.959 10:20:58 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:54.959 10:20:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:54.959 10:20:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.959 10:20:58 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:54.959 10:20:58 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:54.959 10:20:58 json_config -- json_config/json_config.sh@321 -- # [[ -n 2301228 ]] 00:05:54.959 10:20:58 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:54.959 10:20:58 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:54.959 10:20:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:54.959 10:20:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.959 10:20:58 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:05:54.959 10:20:58 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:05:54.959 10:20:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:05:55.218 10:20:58 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:05:55.218 10:20:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:05:55.476 10:20:58 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:05:55.476 10:20:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete 
lvs_test/snapshot0 00:05:55.734 10:20:59 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:05:55.734 10:20:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:05:55.734 10:20:59 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:55.993 10:20:59 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:55.993 10:20:59 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:55.993 10:20:59 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:55.993 10:20:59 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:55.993 10:20:59 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:55.993 10:20:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.993 10:20:59 json_config -- json_config/json_config.sh@327 -- # killprocess 2301228 00:05:55.993 10:20:59 json_config -- common/autotest_common.sh@950 -- # '[' -z 2301228 ']' 00:05:55.993 10:20:59 json_config -- common/autotest_common.sh@954 -- # kill -0 2301228 00:05:55.993 10:20:59 json_config -- common/autotest_common.sh@955 -- # uname 00:05:55.993 10:20:59 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.993 10:20:59 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2301228 00:05:55.993 10:20:59 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.993 10:20:59 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.993 10:20:59 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2301228' 00:05:55.993 killing process with pid 2301228 00:05:55.993 10:20:59 json_config -- common/autotest_common.sh@969 -- # kill 2301228 00:05:55.993 10:20:59 json_config -- 
common/autotest_common.sh@974 -- # wait 2301228 00:05:57.895 10:21:01 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.895 10:21:01 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:57.895 10:21:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:57.895 10:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.895 10:21:01 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:57.895 10:21:01 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:57.895 INFO: Success 00:05:57.895 00:05:57.895 real 0m28.352s 00:05:57.895 user 0m33.519s 00:05:57.895 sys 0m3.336s 00:05:57.895 10:21:01 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.895 10:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.895 ************************************ 00:05:57.895 END TEST json_config 00:05:57.895 ************************************ 00:05:57.895 10:21:01 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.895 10:21:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.895 10:21:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.895 10:21:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.895 ************************************ 00:05:57.895 START TEST json_config_extra_key 00:05:57.895 ************************************ 00:05:57.895 10:21:01 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.895 10:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.895 10:21:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:57.895 10:21:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.895 10:21:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.895 10:21:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.895 10:21:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:05:57.896 10:21:01 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:05:57.896 10:21:01 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.896 10:21:01 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.896 10:21:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.896 10:21:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.896 10:21:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.896 10:21:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:57.896 10:21:01 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:57.896 10:21:01 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.896 10:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:05:57.896 10:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:57.896 10:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:57.896 10:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.896 10:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:57.896 10:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 
1024') 00:05:57.896 10:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:57.896 10:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:57.896 10:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:57.896 10:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.896 10:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:57.896 INFO: launching applications... 00:05:57.896 10:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.896 10:21:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:57.896 10:21:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:57.896 10:21:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.896 10:21:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.896 10:21:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.896 10:21:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.896 10:21:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.896 10:21:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2302632 00:05:57.896 10:21:01 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.896 
10:21:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.896 Waiting for target to run... 00:05:57.896 10:21:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2302632 /var/tmp/spdk_tgt.sock 00:05:57.896 10:21:01 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2302632 ']' 00:05:57.896 10:21:01 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.896 10:21:01 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.896 10:21:01 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.896 10:21:01 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.896 10:21:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.896 [2024-07-25 10:21:01.540362] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:05:57.896 [2024-07-25 10:21:01.540475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302632 ] 00:05:58.465 [2024-07-25 10:21:02.067527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.465 [2024-07-25 10:21:02.168098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.031 10:21:02 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.031 10:21:02 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:59.031 10:21:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:59.031 00:05:59.031 10:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:59.031 INFO: shutting down applications... 00:05:59.031 10:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:59.031 10:21:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:59.031 10:21:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:59.031 10:21:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2302632 ]] 00:05:59.031 10:21:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2302632 00:05:59.031 10:21:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:59.031 10:21:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.031 10:21:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2302632 00:05:59.031 10:21:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.600 10:21:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.600 10:21:03 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.600 10:21:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2302632 00:05:59.600 10:21:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:59.600 10:21:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:59.600 10:21:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:59.600 10:21:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:59.600 SPDK target shutdown done 00:05:59.600 10:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:59.600 Success 00:05:59.600 00:05:59.600 real 0m1.588s 00:05:59.600 user 0m1.250s 00:05:59.600 sys 0m0.630s 00:05:59.600 10:21:03 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.600 10:21:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:59.600 ************************************ 00:05:59.600 END TEST json_config_extra_key 00:05:59.600 ************************************ 00:05:59.600 10:21:03 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.600 10:21:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.600 10:21:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.600 10:21:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.600 ************************************ 00:05:59.600 START TEST alias_rpc 00:05:59.600 ************************************ 00:05:59.600 10:21:03 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.600 * Looking for test storage... 
00:05:59.600 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:05:59.600 10:21:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:59.600 10:21:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2302913 00:05:59.600 10:21:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.600 10:21:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2302913 00:05:59.600 10:21:03 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2302913 ']' 00:05:59.600 10:21:03 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.600 10:21:03 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.600 10:21:03 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.600 10:21:03 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.600 10:21:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.600 [2024-07-25 10:21:03.176870] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:05:59.600 [2024-07-25 10:21:03.176944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302913 ] 00:05:59.600 [2024-07-25 10:21:03.255636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.857 [2024-07-25 10:21:03.367493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.423 10:21:04 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.423 10:21:04 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:00.423 10:21:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:00.682 10:21:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2302913 00:06:00.682 10:21:04 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2302913 ']' 00:06:00.682 10:21:04 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2302913 00:06:00.682 10:21:04 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:00.682 10:21:04 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.682 10:21:04 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2302913 00:06:00.941 10:21:04 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.941 10:21:04 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.941 10:21:04 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2302913' 00:06:00.941 killing process with pid 2302913 00:06:00.941 10:21:04 alias_rpc -- common/autotest_common.sh@969 -- # kill 2302913 00:06:00.941 10:21:04 alias_rpc -- common/autotest_common.sh@974 -- # wait 2302913 00:06:01.200 00:06:01.200 real 0m1.830s 00:06:01.200 user 0m2.060s 00:06:01.200 sys 0m0.500s 00:06:01.200 10:21:04 alias_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.200 10:21:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.200 ************************************ 00:06:01.200 END TEST alias_rpc 00:06:01.200 ************************************ 00:06:01.459 10:21:04 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:01.459 10:21:04 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:01.459 10:21:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.459 10:21:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.459 10:21:04 -- common/autotest_common.sh@10 -- # set +x 00:06:01.459 ************************************ 00:06:01.459 START TEST spdkcli_tcp 00:06:01.459 ************************************ 00:06:01.459 10:21:04 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:01.459 * Looking for test storage... 
00:06:01.459 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:06:01.459 10:21:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:06:01.459 10:21:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:01.459 10:21:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:06:01.459 10:21:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:01.459 10:21:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:01.459 10:21:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:01.459 10:21:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:01.459 10:21:04 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:01.459 10:21:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.459 10:21:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2303166 00:06:01.459 10:21:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:01.459 10:21:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2303166 00:06:01.459 10:21:04 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2303166 ']' 00:06:01.459 10:21:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.459 10:21:04 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.459 10:21:04 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:01.459 10:21:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.459 10:21:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.459 [2024-07-25 10:21:05.057411] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:01.459 [2024-07-25 10:21:05.057501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2303166 ] 00:06:01.459 [2024-07-25 10:21:05.133287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.718 [2024-07-25 10:21:05.246961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.718 [2024-07-25 10:21:05.246964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.286 10:21:05 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.286 10:21:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:02.286 10:21:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2303302 00:06:02.286 10:21:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:02.286 10:21:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:02.545 [ 00:06:02.545 "bdev_malloc_delete", 00:06:02.545 "bdev_malloc_create", 00:06:02.545 "bdev_null_resize", 00:06:02.545 "bdev_null_delete", 00:06:02.545 "bdev_null_create", 00:06:02.545 "bdev_nvme_cuse_unregister", 00:06:02.545 "bdev_nvme_cuse_register", 00:06:02.545 "bdev_opal_new_user", 00:06:02.545 "bdev_opal_set_lock_state", 00:06:02.545 "bdev_opal_delete", 00:06:02.545 "bdev_opal_get_info", 00:06:02.545 "bdev_opal_create", 00:06:02.545 "bdev_nvme_opal_revert", 00:06:02.545 "bdev_nvme_opal_init", 00:06:02.545 "bdev_nvme_send_cmd", 00:06:02.545 
"bdev_nvme_get_path_iostat", 00:06:02.545 "bdev_nvme_get_mdns_discovery_info", 00:06:02.545 "bdev_nvme_stop_mdns_discovery", 00:06:02.545 "bdev_nvme_start_mdns_discovery", 00:06:02.545 "bdev_nvme_set_multipath_policy", 00:06:02.545 "bdev_nvme_set_preferred_path", 00:06:02.545 "bdev_nvme_get_io_paths", 00:06:02.545 "bdev_nvme_remove_error_injection", 00:06:02.545 "bdev_nvme_add_error_injection", 00:06:02.545 "bdev_nvme_get_discovery_info", 00:06:02.545 "bdev_nvme_stop_discovery", 00:06:02.545 "bdev_nvme_start_discovery", 00:06:02.545 "bdev_nvme_get_controller_health_info", 00:06:02.545 "bdev_nvme_disable_controller", 00:06:02.545 "bdev_nvme_enable_controller", 00:06:02.545 "bdev_nvme_reset_controller", 00:06:02.545 "bdev_nvme_get_transport_statistics", 00:06:02.545 "bdev_nvme_apply_firmware", 00:06:02.545 "bdev_nvme_detach_controller", 00:06:02.545 "bdev_nvme_get_controllers", 00:06:02.545 "bdev_nvme_attach_controller", 00:06:02.545 "bdev_nvme_set_hotplug", 00:06:02.545 "bdev_nvme_set_options", 00:06:02.545 "bdev_passthru_delete", 00:06:02.545 "bdev_passthru_create", 00:06:02.545 "bdev_lvol_set_parent_bdev", 00:06:02.545 "bdev_lvol_set_parent", 00:06:02.545 "bdev_lvol_check_shallow_copy", 00:06:02.545 "bdev_lvol_start_shallow_copy", 00:06:02.545 "bdev_lvol_grow_lvstore", 00:06:02.545 "bdev_lvol_get_lvols", 00:06:02.545 "bdev_lvol_get_lvstores", 00:06:02.545 "bdev_lvol_delete", 00:06:02.545 "bdev_lvol_set_read_only", 00:06:02.545 "bdev_lvol_resize", 00:06:02.545 "bdev_lvol_decouple_parent", 00:06:02.545 "bdev_lvol_inflate", 00:06:02.545 "bdev_lvol_rename", 00:06:02.545 "bdev_lvol_clone_bdev", 00:06:02.545 "bdev_lvol_clone", 00:06:02.545 "bdev_lvol_snapshot", 00:06:02.546 "bdev_lvol_create", 00:06:02.546 "bdev_lvol_delete_lvstore", 00:06:02.546 "bdev_lvol_rename_lvstore", 00:06:02.546 "bdev_lvol_create_lvstore", 00:06:02.546 "bdev_raid_set_options", 00:06:02.546 "bdev_raid_remove_base_bdev", 00:06:02.546 "bdev_raid_add_base_bdev", 00:06:02.546 "bdev_raid_delete", 
00:06:02.546 "bdev_raid_create", 00:06:02.546 "bdev_raid_get_bdevs", 00:06:02.546 "bdev_error_inject_error", 00:06:02.546 "bdev_error_delete", 00:06:02.546 "bdev_error_create", 00:06:02.546 "bdev_split_delete", 00:06:02.546 "bdev_split_create", 00:06:02.546 "bdev_delay_delete", 00:06:02.546 "bdev_delay_create", 00:06:02.546 "bdev_delay_update_latency", 00:06:02.546 "bdev_zone_block_delete", 00:06:02.546 "bdev_zone_block_create", 00:06:02.546 "blobfs_create", 00:06:02.546 "blobfs_detect", 00:06:02.546 "blobfs_set_cache_size", 00:06:02.546 "bdev_crypto_delete", 00:06:02.546 "bdev_crypto_create", 00:06:02.546 "bdev_compress_delete", 00:06:02.546 "bdev_compress_create", 00:06:02.546 "bdev_compress_get_orphans", 00:06:02.546 "bdev_aio_delete", 00:06:02.546 "bdev_aio_rescan", 00:06:02.546 "bdev_aio_create", 00:06:02.546 "bdev_ftl_set_property", 00:06:02.546 "bdev_ftl_get_properties", 00:06:02.546 "bdev_ftl_get_stats", 00:06:02.546 "bdev_ftl_unmap", 00:06:02.546 "bdev_ftl_unload", 00:06:02.546 "bdev_ftl_delete", 00:06:02.546 "bdev_ftl_load", 00:06:02.546 "bdev_ftl_create", 00:06:02.546 "bdev_virtio_attach_controller", 00:06:02.546 "bdev_virtio_scsi_get_devices", 00:06:02.546 "bdev_virtio_detach_controller", 00:06:02.546 "bdev_virtio_blk_set_hotplug", 00:06:02.546 "bdev_iscsi_delete", 00:06:02.546 "bdev_iscsi_create", 00:06:02.546 "bdev_iscsi_set_options", 00:06:02.546 "accel_error_inject_error", 00:06:02.546 "ioat_scan_accel_module", 00:06:02.546 "dsa_scan_accel_module", 00:06:02.546 "iaa_scan_accel_module", 00:06:02.546 "dpdk_cryptodev_get_driver", 00:06:02.546 "dpdk_cryptodev_set_driver", 00:06:02.546 "dpdk_cryptodev_scan_accel_module", 00:06:02.546 "compressdev_scan_accel_module", 00:06:02.546 "keyring_file_remove_key", 00:06:02.546 "keyring_file_add_key", 00:06:02.546 "keyring_linux_set_options", 00:06:02.546 "iscsi_get_histogram", 00:06:02.546 "iscsi_enable_histogram", 00:06:02.546 "iscsi_set_options", 00:06:02.546 "iscsi_get_auth_groups", 00:06:02.546 
"iscsi_auth_group_remove_secret", 00:06:02.546 "iscsi_auth_group_add_secret", 00:06:02.546 "iscsi_delete_auth_group", 00:06:02.546 "iscsi_create_auth_group", 00:06:02.546 "iscsi_set_discovery_auth", 00:06:02.546 "iscsi_get_options", 00:06:02.546 "iscsi_target_node_request_logout", 00:06:02.546 "iscsi_target_node_set_redirect", 00:06:02.546 "iscsi_target_node_set_auth", 00:06:02.546 "iscsi_target_node_add_lun", 00:06:02.546 "iscsi_get_stats", 00:06:02.546 "iscsi_get_connections", 00:06:02.546 "iscsi_portal_group_set_auth", 00:06:02.546 "iscsi_start_portal_group", 00:06:02.546 "iscsi_delete_portal_group", 00:06:02.546 "iscsi_create_portal_group", 00:06:02.546 "iscsi_get_portal_groups", 00:06:02.546 "iscsi_delete_target_node", 00:06:02.546 "iscsi_target_node_remove_pg_ig_maps", 00:06:02.546 "iscsi_target_node_add_pg_ig_maps", 00:06:02.546 "iscsi_create_target_node", 00:06:02.546 "iscsi_get_target_nodes", 00:06:02.546 "iscsi_delete_initiator_group", 00:06:02.546 "iscsi_initiator_group_remove_initiators", 00:06:02.546 "iscsi_initiator_group_add_initiators", 00:06:02.546 "iscsi_create_initiator_group", 00:06:02.546 "iscsi_get_initiator_groups", 00:06:02.546 "nvmf_set_crdt", 00:06:02.546 "nvmf_set_config", 00:06:02.546 "nvmf_set_max_subsystems", 00:06:02.546 "nvmf_stop_mdns_prr", 00:06:02.546 "nvmf_publish_mdns_prr", 00:06:02.546 "nvmf_subsystem_get_listeners", 00:06:02.546 "nvmf_subsystem_get_qpairs", 00:06:02.546 "nvmf_subsystem_get_controllers", 00:06:02.546 "nvmf_get_stats", 00:06:02.546 "nvmf_get_transports", 00:06:02.546 "nvmf_create_transport", 00:06:02.546 "nvmf_get_targets", 00:06:02.546 "nvmf_delete_target", 00:06:02.546 "nvmf_create_target", 00:06:02.546 "nvmf_subsystem_allow_any_host", 00:06:02.546 "nvmf_subsystem_remove_host", 00:06:02.546 "nvmf_subsystem_add_host", 00:06:02.546 "nvmf_ns_remove_host", 00:06:02.546 "nvmf_ns_add_host", 00:06:02.546 "nvmf_subsystem_remove_ns", 00:06:02.546 "nvmf_subsystem_add_ns", 00:06:02.546 
"nvmf_subsystem_listener_set_ana_state", 00:06:02.546 "nvmf_discovery_get_referrals", 00:06:02.546 "nvmf_discovery_remove_referral", 00:06:02.546 "nvmf_discovery_add_referral", 00:06:02.546 "nvmf_subsystem_remove_listener", 00:06:02.546 "nvmf_subsystem_add_listener", 00:06:02.546 "nvmf_delete_subsystem", 00:06:02.546 "nvmf_create_subsystem", 00:06:02.546 "nvmf_get_subsystems", 00:06:02.546 "env_dpdk_get_mem_stats", 00:06:02.546 "nbd_get_disks", 00:06:02.546 "nbd_stop_disk", 00:06:02.546 "nbd_start_disk", 00:06:02.546 "ublk_recover_disk", 00:06:02.546 "ublk_get_disks", 00:06:02.546 "ublk_stop_disk", 00:06:02.546 "ublk_start_disk", 00:06:02.546 "ublk_destroy_target", 00:06:02.546 "ublk_create_target", 00:06:02.546 "virtio_blk_create_transport", 00:06:02.546 "virtio_blk_get_transports", 00:06:02.546 "vhost_controller_set_coalescing", 00:06:02.546 "vhost_get_controllers", 00:06:02.546 "vhost_delete_controller", 00:06:02.546 "vhost_create_blk_controller", 00:06:02.546 "vhost_scsi_controller_remove_target", 00:06:02.546 "vhost_scsi_controller_add_target", 00:06:02.546 "vhost_start_scsi_controller", 00:06:02.547 "vhost_create_scsi_controller", 00:06:02.547 "thread_set_cpumask", 00:06:02.547 "framework_get_governor", 00:06:02.547 "framework_get_scheduler", 00:06:02.547 "framework_set_scheduler", 00:06:02.547 "framework_get_reactors", 00:06:02.547 "thread_get_io_channels", 00:06:02.547 "thread_get_pollers", 00:06:02.547 "thread_get_stats", 00:06:02.547 "framework_monitor_context_switch", 00:06:02.547 "spdk_kill_instance", 00:06:02.547 "log_enable_timestamps", 00:06:02.547 "log_get_flags", 00:06:02.547 "log_clear_flag", 00:06:02.547 "log_set_flag", 00:06:02.547 "log_get_level", 00:06:02.547 "log_set_level", 00:06:02.547 "log_get_print_level", 00:06:02.547 "log_set_print_level", 00:06:02.547 "framework_enable_cpumask_locks", 00:06:02.547 "framework_disable_cpumask_locks", 00:06:02.547 "framework_wait_init", 00:06:02.547 "framework_start_init", 00:06:02.547 "scsi_get_devices", 
00:06:02.547 "bdev_get_histogram", 00:06:02.547 "bdev_enable_histogram", 00:06:02.547 "bdev_set_qos_limit", 00:06:02.547 "bdev_set_qd_sampling_period", 00:06:02.547 "bdev_get_bdevs", 00:06:02.547 "bdev_reset_iostat", 00:06:02.547 "bdev_get_iostat", 00:06:02.547 "bdev_examine", 00:06:02.547 "bdev_wait_for_examine", 00:06:02.547 "bdev_set_options", 00:06:02.547 "notify_get_notifications", 00:06:02.547 "notify_get_types", 00:06:02.547 "accel_get_stats", 00:06:02.547 "accel_set_options", 00:06:02.547 "accel_set_driver", 00:06:02.547 "accel_crypto_key_destroy", 00:06:02.547 "accel_crypto_keys_get", 00:06:02.547 "accel_crypto_key_create", 00:06:02.547 "accel_assign_opc", 00:06:02.547 "accel_get_module_info", 00:06:02.547 "accel_get_opc_assignments", 00:06:02.547 "vmd_rescan", 00:06:02.547 "vmd_remove_device", 00:06:02.547 "vmd_enable", 00:06:02.547 "sock_get_default_impl", 00:06:02.547 "sock_set_default_impl", 00:06:02.547 "sock_impl_set_options", 00:06:02.547 "sock_impl_get_options", 00:06:02.547 "iobuf_get_stats", 00:06:02.547 "iobuf_set_options", 00:06:02.547 "framework_get_pci_devices", 00:06:02.547 "framework_get_config", 00:06:02.547 "framework_get_subsystems", 00:06:02.547 "trace_get_info", 00:06:02.547 "trace_get_tpoint_group_mask", 00:06:02.547 "trace_disable_tpoint_group", 00:06:02.547 "trace_enable_tpoint_group", 00:06:02.547 "trace_clear_tpoint_mask", 00:06:02.547 "trace_set_tpoint_mask", 00:06:02.547 "keyring_get_keys", 00:06:02.547 "spdk_get_version", 00:06:02.547 "rpc_get_methods" 00:06:02.547 ] 00:06:02.547 10:21:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:02.547 10:21:06 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:02.547 10:21:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.805 10:21:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:02.805 10:21:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2303166 00:06:02.805 10:21:06 spdkcli_tcp -- 
common/autotest_common.sh@950 -- # '[' -z 2303166 ']' 00:06:02.805 10:21:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2303166 00:06:02.805 10:21:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:02.805 10:21:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.805 10:21:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2303166 00:06:02.805 10:21:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.805 10:21:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.806 10:21:06 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2303166' 00:06:02.806 killing process with pid 2303166 00:06:02.806 10:21:06 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2303166 00:06:02.806 10:21:06 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2303166 00:06:03.064 00:06:03.064 real 0m1.822s 00:06:03.064 user 0m3.416s 00:06:03.064 sys 0m0.519s 00:06:03.064 10:21:06 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.064 10:21:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.064 ************************************ 00:06:03.064 END TEST spdkcli_tcp 00:06:03.064 ************************************ 00:06:03.322 10:21:06 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.322 10:21:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.322 10:21:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.322 10:21:06 -- common/autotest_common.sh@10 -- # set +x 00:06:03.322 ************************************ 00:06:03.322 START TEST dpdk_mem_utility 00:06:03.323 ************************************ 00:06:03.323 10:21:06 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.323 * Looking for test storage... 00:06:03.323 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility 00:06:03.323 10:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:03.323 10:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2303497 00:06:03.323 10:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.323 10:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2303497 00:06:03.323 10:21:06 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2303497 ']' 00:06:03.323 10:21:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.323 10:21:06 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.323 10:21:06 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.323 10:21:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.323 10:21:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.323 [2024-07-25 10:21:06.927623] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:03.323 [2024-07-25 10:21:06.927698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2303497 ] 00:06:03.323 [2024-07-25 10:21:07.002785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.581 [2024-07-25 10:21:07.111760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.518 10:21:07 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.518 10:21:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:04.518 10:21:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:04.518 10:21:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:04.518 10:21:07 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.518 10:21:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.518 { 00:06:04.518 "filename": "/tmp/spdk_mem_dump.txt" 00:06:04.518 } 00:06:04.518 10:21:07 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.518 10:21:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:04.518 DPDK memory size 816.000000 MiB in 2 heap(s) 00:06:04.518 2 heaps totaling size 816.000000 MiB 00:06:04.518 size: 814.000000 MiB heap id: 0 00:06:04.518 size: 2.000000 MiB heap id: 1 00:06:04.518 end heaps---------- 00:06:04.518 8 mempools totaling size 598.116089 MiB 00:06:04.518 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:04.518 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:04.518 size: 84.521057 MiB name: bdev_io_2303497 00:06:04.518 size: 51.011292 MiB name: evtpool_2303497 00:06:04.518 size: 
50.003479 MiB name: msgpool_2303497 00:06:04.518 size: 21.763794 MiB name: PDU_Pool 00:06:04.518 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:04.518 size: 0.026123 MiB name: Session_Pool 00:06:04.518 end mempools------- 00:06:04.518 201 memzones totaling size 4.176453 MiB 00:06:04.518 size: 1.000366 MiB name: RG_ring_0_2303497 00:06:04.518 size: 1.000366 MiB name: RG_ring_1_2303497 00:06:04.518 size: 1.000366 MiB name: RG_ring_4_2303497 00:06:04.518 size: 1.000366 MiB name: RG_ring_5_2303497 00:06:04.518 size: 0.125366 MiB name: RG_ring_2_2303497 00:06:04.518 size: 0.015991 MiB name: RG_ring_3_2303497 00:06:04.518 size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:06:04.518 size: 0.000305 MiB name: 0000:84:01.0_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:01.1_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:01.2_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:01.3_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:01.4_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:01.5_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:01.6_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:01.7_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:02.0_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:02.1_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:02.2_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:02.3_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:02.4_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:02.5_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:02.6_qat 00:06:04.518 size: 0.000305 MiB name: 0000:84:02.7_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:01.0_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:01.1_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:01.2_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:01.3_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:01.4_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:01.5_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:01.6_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:01.7_qat 
00:06:04.518 size: 0.000305 MiB name: 0000:85:02.0_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:02.1_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:02.2_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:02.3_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:02.4_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:02.5_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:02.6_qat 00:06:04.518 size: 0.000305 MiB name: 0000:85:02.7_qat 00:06:04.518 size: 0.000305 MiB name: 0000:86:01.0_qat 00:06:04.518 size: 0.000305 MiB name: 0000:86:01.1_qat 00:06:04.518 size: 0.000305 MiB name: 0000:86:01.2_qat 00:06:04.518 size: 0.000305 MiB name: 0000:86:01.3_qat 00:06:04.518 size: 0.000305 MiB name: 0000:86:01.4_qat 00:06:04.518 size: 0.000305 MiB name: 0000:86:01.5_qat 00:06:04.519 size: 0.000305 MiB name: 0000:86:01.6_qat 00:06:04.519 size: 0.000305 MiB name: 0000:86:01.7_qat 00:06:04.519 size: 0.000305 MiB name: 0000:86:02.0_qat 00:06:04.519 size: 0.000305 MiB name: 0000:86:02.1_qat 00:06:04.519 size: 0.000305 MiB name: 0000:86:02.2_qat 00:06:04.519 size: 0.000305 MiB name: 0000:86:02.3_qat 00:06:04.519 size: 0.000305 MiB name: 0000:86:02.4_qat 00:06:04.519 size: 0.000305 MiB name: 0000:86:02.5_qat 00:06:04.519 size: 0.000305 MiB name: 0000:86:02.6_qat 00:06:04.519 size: 0.000305 MiB name: 0000:86:02.7_qat 00:06:04.519 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_0 00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_1 00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_0 00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_2 00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_3 00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_1 00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_4 00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_5 00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_2 00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_6 00:06:04.519 size: 
0.000122 MiB name: rte_cryptodev_data_7
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_3
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_8
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_9
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_4
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_10
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_11
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_5
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_12
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_13
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_6
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_14
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_15
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_7
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_16
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_17
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_8
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_18
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_19
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_9
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_20
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_21
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_10
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_22
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_23
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_11
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_24
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_25
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_12
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_26
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_27
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_13
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_28
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_29
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_14
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_30
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_31
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_15
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_32
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_33
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_16
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_34
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_35
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_17
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_36
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_37
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_18
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_38
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_39
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_19
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_40
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_41
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_20
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_42
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_43
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_21
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_44
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_45
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_22
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_46
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_47
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_23
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_48
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_49
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_24
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_50
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_51
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_25
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_52
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_53
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_26
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_54
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_55
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_27
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_56
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_57
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_28
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_58
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_59
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_29
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_60
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_61
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_30
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_62
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_63
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_31
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_64
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_65
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_32
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_66
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_67
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_33
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_68
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_69
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_34
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_70
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_71
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_35
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_72
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_73
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_36
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_74
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_75
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_37
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_76
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_77
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_38
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_78
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_79
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_39
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_80
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_81
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_40
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_82
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_83
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_41
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_84
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_85
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_42
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_86
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_87
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_43
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_88
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_89
00:06:04.519 size: 0.000122 MiB name: rte_compressdev_data_44
00:06:04.519 size: 0.000122 MiB name: rte_cryptodev_data_90
00:06:04.520 size: 0.000122 MiB name: rte_cryptodev_data_91
00:06:04.520 size: 0.000122 MiB name: rte_compressdev_data_45
00:06:04.520 size: 0.000122 MiB name: rte_cryptodev_data_92
00:06:04.520 size: 0.000122 MiB name: rte_cryptodev_data_93
00:06:04.520 size: 0.000122 MiB name: rte_compressdev_data_46
00:06:04.520 size: 0.000122 MiB name: rte_cryptodev_data_94
00:06:04.520 size: 0.000122 MiB name: rte_cryptodev_data_95
00:06:04.520 size: 0.000122 MiB name: rte_compressdev_data_47
00:06:04.520 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1
00:06:04.520 end memzones-------
00:06:04.520 10:21:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:06:04.520 heap id: 0 total size: 814.000000 MiB number of busy elements: 494 number of free elements: 14
00:06:04.520 list of free elements. size: 11.842712 MiB
00:06:04.520 element at address: 0x200000400000 with size: 1.999512 MiB
00:06:04.520 element at address: 0x200018e00000 with size: 0.999878 MiB
00:06:04.520 element at address: 0x200019000000 with size: 0.999878 MiB
00:06:04.520 element at address: 0x200003e00000 with size: 0.996460 MiB
00:06:04.520 element at address: 0x200031c00000 with size: 0.994446 MiB
00:06:04.520 element at address: 0x200007000000 with size: 0.991760 MiB
00:06:04.520 element at address: 0x200013800000 with size: 0.978882 MiB
00:06:04.520 element at address: 0x200019200000 with size: 0.937256 MiB
00:06:04.520 element at address: 0x20001aa00000 with size: 0.583252 MiB
00:06:04.520 element at address: 0x200003a00000 with size: 0.498535 MiB
00:06:04.520 element at address: 0x20000b200000 with size: 0.491272 MiB
00:06:04.520 element at address: 0x200000800000 with size: 0.486145 MiB
00:06:04.520 element at address: 0x200019400000 with size: 0.485840 MiB
00:06:04.520 element at address: 0x200027e00000 with size: 0.399597 MiB
00:06:04.520 list of standard malloc elements. size: 199.872437 MiB
00:06:04.520 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:06:04.520 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:06:04.520 element at address: 0x200018efff80 with size: 1.000122 MiB
00:06:04.520 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:06:04.520 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:06:04.520 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:06:04.520 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:06:04.520 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:06:04.520 element at address: 0x20000033b340 with size: 0.004395 MiB
00:06:04.520 element at address: 0x20000033e8c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x200000341e40 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003453c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x200000348940 with size: 0.004395 MiB
00:06:04.520 element at address: 0x20000034bec0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x20000034f440 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003529c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x200000355f40 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003594c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x20000035ca40 with size: 0.004395 MiB
00:06:04.520 element at address: 0x20000035ffc0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x200000363540 with size: 0.004395 MiB
00:06:04.520 element at address: 0x200000366ac0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x20000036a040 with size: 0.004395 MiB
00:06:04.520 element at address: 0x20000036d5c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x200000370b40 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003740c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x200000377640 with size: 0.004395 MiB
00:06:04.520 element at address: 0x20000037abc0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x20000037e140 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003816c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x200000384c40 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003881c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x20000038b740 with size: 0.004395 MiB
00:06:04.520 element at address: 0x20000038ecc0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x200000392240 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003957c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x200000398d40 with size: 0.004395 MiB
00:06:04.520 element at address: 0x20000039c2c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x20000039f840 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003a2dc0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003a6340 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003a98c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003ace40 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003b03c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003b3940 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003b6ec0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003ba440 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003bd9c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003c0f40 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003c44c0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003c7a40 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003cafc0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003ce540 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003d1ac0 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003d5040 with size: 0.004395 MiB
00:06:04.520 element at address: 0x2000003d8d00 with size: 0.004395 MiB
00:06:04.520 element at address: 0x200000339240 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000033a2c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000033c7c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000033d840 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000033fd40 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000340dc0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x2000003432c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000344340 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000346840 with size: 0.004028 MiB
00:06:04.520 element at address: 0x2000003478c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000349dc0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000034ae40 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000034d340 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000034e3c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x2000003508c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000351940 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000353e40 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000354ec0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x2000003573c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000358440 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000035a940 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000035b9c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000035dec0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000035ef40 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000361440 with size: 0.004028 MiB
00:06:04.520 element at address: 0x2000003624c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x2000003649c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000365a40 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000367f40 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000368fc0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000036b4c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000036c540 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000036ea40 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000036fac0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000371fc0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000373040 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000375540 with size: 0.004028 MiB
00:06:04.520 element at address: 0x2000003765c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000378ac0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000379b40 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000037c040 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000037d0c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000037f5c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000380640 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000382b40 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000383bc0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x2000003860c0 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000387140 with size: 0.004028 MiB
00:06:04.520 element at address: 0x200000389640 with size: 0.004028 MiB
00:06:04.520 element at address: 0x20000038a6c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x20000038cbc0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x20000038dc40 with size: 0.004028 MiB
00:06:04.521 element at address: 0x200000390140 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003911c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003936c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x200000394740 with size: 0.004028 MiB
00:06:04.521 element at address: 0x200000396c40 with size: 0.004028 MiB
00:06:04.521 element at address: 0x200000397cc0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x20000039a1c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x20000039b240 with size: 0.004028 MiB
00:06:04.521 element at address: 0x20000039d740 with size: 0.004028 MiB
00:06:04.521 element at address: 0x20000039e7c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003a0cc0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003a1d40 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003a4240 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003a52c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003a77c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003a8840 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003aad40 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003abdc0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003ae2c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003af340 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003b1840 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003b28c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003b4dc0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003b5e40 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003b8340 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003b93c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003bb8c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003bc940 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003bee40 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003bfec0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003c23c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003c3440 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003c5940 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003c69c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003c8ec0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003c9f40 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003cc440 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003cd4c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003cf9c0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003d0a40 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003d2f40 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003d3fc0 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003d6c00 with size: 0.004028 MiB
00:06:04.521 element at address: 0x2000003d7c80 with size: 0.004028 MiB
00:06:04.521 element at address: 0x200000200000 with size: 0.000305 MiB
00:06:04.521 element at address: 0x20000020e880 with size: 0.000305 MiB
00:06:04.521 element at address: 0x200000200140 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200200 with size: 0.000183 MiB
00:06:04.521 element at address: 0x2000002002c0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200380 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200440 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200500 with size: 0.000183 MiB
00:06:04.521 element at address: 0x2000002005c0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200680 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200740 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200800 with size: 0.000183 MiB
00:06:04.521 element at address: 0x2000002008c0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200980 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200a40 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200b00 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200bc0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200c80 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200d40 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000200e00 with size: 0.000183 MiB
00:06:04.521 element at address: 0x2000002090c0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209180 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209240 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209300 with size: 0.000183 MiB
00:06:04.521 element at address: 0x2000002093c0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209480 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209540 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209600 with size: 0.000183 MiB
00:06:04.521 element at address: 0x2000002096c0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209780 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209840 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209900 with size: 0.000183 MiB
00:06:04.521 element at address: 0x2000002099c0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209a80 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209b40 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209c00 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209cc0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209d80 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209e40 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209f00 with size: 0.000183 MiB
00:06:04.521 element at address: 0x200000209fc0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020a080 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020a140 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020a200 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020a2c0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020a380 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020a440 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020a500 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020a5c0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020a680 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020a740 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020a800 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020a8c0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020a980 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020aa40 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020ab00 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020abc0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020ac80 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020ad40 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020ae00 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020aec0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020af80 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020b040 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020b100 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020b1c0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020b280 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020b340 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020b400 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020b4c0 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020b580 with size: 0.000183 MiB
00:06:04.521 element at address: 0x20000020b640 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020b700 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020b7c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020b880 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020b940 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020ba00 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020bac0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020bb80 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020bc40 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020bd00 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020bdc0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020be80 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020bf40 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c000 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c0c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c180 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c240 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c300 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c3c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c480 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c540 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c600 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c6c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c780 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c840 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c900 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020c9c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020ca80 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020cb40 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020cc00 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020ccc0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020cd80 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020ce40 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020cf00 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020cfc0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020d080 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020d140 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020d200 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020d2c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020d380 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020d440 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020d500 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020d5c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020d680 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020d740 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020d800 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020d8c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020d980 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020da40 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020db00 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020dbc0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020dc80 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020dd40 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020de00 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020dec0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020df80 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020e040 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020e100 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020e1c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020e280 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020e340 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020e400 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020e4c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020e580 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020e640 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020e700 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020e7c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020e9c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020ea80 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020eb40 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020ec00 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020ecc0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020ed80 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020ee40 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020ef00 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020efc0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020f080 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020f140 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020f200 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020f2c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020f380 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020f440 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020f500 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020f5c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020f680 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020f740 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020f800 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020f8c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020f980 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020fa40 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020fb00 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020fbc0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020fc80 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020fd40 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020fe00 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020fec0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x20000020ff80 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000210040 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000210100 with size: 0.000183 MiB
00:06:04.522 element at address: 0x2000002101c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000210280 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000210340 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000210400 with size: 0.000183 MiB
00:06:04.522 element at address: 0x2000002104c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000210580 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000210640 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000210700 with size: 0.000183 MiB
00:06:04.522 element at address: 0x2000002107c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000210880 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000210940 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000210a00 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000210c00 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000214ec0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235180 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235240 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235300 with size: 0.000183 MiB
00:06:04.522 element at address: 0x2000002353c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235480 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235540 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235600 with size: 0.000183 MiB
00:06:04.522 element at address: 0x2000002356c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235780 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235840 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235900 with size: 0.000183 MiB
00:06:04.522 element at address: 0x2000002359c0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235a80 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235b40 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235c00 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235cc0 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235d80 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235e40 with size: 0.000183 MiB
00:06:04.522 element at address: 0x200000235f00 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236100 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000002361c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236280 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236340 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236400 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000002364c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236580 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236640 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236700 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000002367c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236880 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236940 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236a00 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236ac0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236b80 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236c40 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000236d00 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000338f00 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000338fc0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000033c540 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000033fac0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000343040 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003465c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000349b40 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000034d0c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000350640 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000353bc0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000357140 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000035a6c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000035dc40 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003611c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000364740 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000367cc0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000036b240 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000036e7c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000371d40 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003752c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000378840 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000037bdc0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000037f340 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003828c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000385e40 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003893c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000038c940 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000038fec0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000393440 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003969c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200000399f40 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000039d4c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003a0a40 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003a3fc0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003a7540 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003aaac0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003ae040 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003b15c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003b4b40 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003b80c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003bb640 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003bebc0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003c2140 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003c56c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003c8c40 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003cc1c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003cf740 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003d2cc0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000003d6840 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000087c740 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000087c800 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000087c8c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000087c980 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000087ca40 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000087cb00 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000087cbc0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000087cc80 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000087cd40 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:06:04.523 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200027e664c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200027e66580 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200027e6d180 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200027e6d380 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200027e6d440 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200027e6d500 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200027e6d5c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200027e6d680 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200027e6d740 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200027e6d800 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200027e6d8c0 with size: 0.000183 MiB
00:06:04.523 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6edc0 with 
size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:04.523 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:04.524 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:04.524 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:04.524 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:04.524 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:04.524 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:04.524 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:04.524 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:04.524 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:04.524 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:04.524 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:04.524 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:04.524 list of memzone associated elements. 
size: 602.284851 MiB 00:06:04.524 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:04.524 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:04.524 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:04.524 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:04.524 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:04.524 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2303497_0 00:06:04.524 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:04.524 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2303497_0 00:06:04.524 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:04.524 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2303497_0 00:06:04.524 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:04.524 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:04.524 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:04.524 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:04.524 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:04.524 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2303497 00:06:04.524 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:04.524 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2303497 00:06:04.524 element at address: 0x200000236dc0 with size: 1.008118 MiB 00:06:04.524 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2303497 00:06:04.524 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:04.524 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:04.524 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:04.524 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:04.524 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:04.524 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:04.524 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:04.524 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:04.524 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:04.524 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2303497 00:06:04.524 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:04.524 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2303497 00:06:04.524 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:04.524 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2303497 00:06:04.524 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:04.524 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2303497 00:06:04.524 element at address: 0x200003a7fa00 with size: 0.500488 MiB 00:06:04.524 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2303497 00:06:04.524 element at address: 0x20000b27dc40 with size: 0.500488 MiB 00:06:04.524 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:04.524 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:04.524 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:04.524 element at address: 0x20001947c600 with size: 0.250488 MiB 00:06:04.524 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:04.524 element at address: 0x200000214f80 with size: 0.125488 MiB 00:06:04.524 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2303497 00:06:04.524 element at address: 0x200000200ec0 with size: 0.031738 MiB 00:06:04.524 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:04.524 element at address: 0x200027e66640 with size: 0.023743 MiB 00:06:04.524 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:04.524 element at address: 0x200000210cc0 with size: 0.016113 
MiB 00:06:04.524 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2303497 00:06:04.524 element at address: 0x200027e6c780 with size: 0.002441 MiB 00:06:04.524 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:04.524 element at address: 0x2000003d6300 with size: 0.001282 MiB 00:06:04.524 associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:06:04.524 element at address: 0x2000003d6a40 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:01.0_qat 00:06:04.524 element at address: 0x2000003d2d80 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:01.1_qat 00:06:04.524 element at address: 0x2000003cf800 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:01.2_qat 00:06:04.524 element at address: 0x2000003cc280 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:01.3_qat 00:06:04.524 element at address: 0x2000003c8d00 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:01.4_qat 00:06:04.524 element at address: 0x2000003c5780 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:01.5_qat 00:06:04.524 element at address: 0x2000003c2200 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:01.6_qat 00:06:04.524 element at address: 0x2000003bec80 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:01.7_qat 00:06:04.524 element at address: 0x2000003bb700 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:02.0_qat 00:06:04.524 element at address: 0x2000003b8180 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:02.1_qat 00:06:04.524 element at address: 0x2000003b4c00 with size: 0.000427 MiB 00:06:04.524 
associated memzone info: size: 0.000305 MiB name: 0000:84:02.2_qat 00:06:04.524 element at address: 0x2000003b1680 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:02.3_qat 00:06:04.524 element at address: 0x2000003ae100 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:02.4_qat 00:06:04.524 element at address: 0x2000003aab80 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:02.5_qat 00:06:04.524 element at address: 0x2000003a7600 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:02.6_qat 00:06:04.524 element at address: 0x2000003a4080 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:84:02.7_qat 00:06:04.524 element at address: 0x2000003a0b00 with size: 0.000427 MiB 00:06:04.524 associated memzone info: size: 0.000305 MiB name: 0000:85:01.0_qat 00:06:04.524 element at address: 0x20000039d580 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:01.1_qat 00:06:04.525 element at address: 0x20000039a000 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:01.2_qat 00:06:04.525 element at address: 0x200000396a80 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:01.3_qat 00:06:04.525 element at address: 0x200000393500 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:01.4_qat 00:06:04.525 element at address: 0x20000038ff80 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:01.5_qat 00:06:04.525 element at address: 0x20000038ca00 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:01.6_qat 00:06:04.525 element at address: 0x200000389480 with size: 0.000427 MiB 00:06:04.525 associated memzone 
info: size: 0.000305 MiB name: 0000:85:01.7_qat 00:06:04.525 element at address: 0x200000385f00 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:02.0_qat 00:06:04.525 element at address: 0x200000382980 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:02.1_qat 00:06:04.525 element at address: 0x20000037f400 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:02.2_qat 00:06:04.525 element at address: 0x20000037be80 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:02.3_qat 00:06:04.525 element at address: 0x200000378900 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:02.4_qat 00:06:04.525 element at address: 0x200000375380 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:02.5_qat 00:06:04.525 element at address: 0x200000371e00 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:02.6_qat 00:06:04.525 element at address: 0x20000036e880 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:85:02.7_qat 00:06:04.525 element at address: 0x20000036b300 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:01.0_qat 00:06:04.525 element at address: 0x200000367d80 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:01.1_qat 00:06:04.525 element at address: 0x200000364800 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:01.2_qat 00:06:04.525 element at address: 0x200000361280 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:01.3_qat 00:06:04.525 element at address: 0x20000035dd00 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 
MiB name: 0000:86:01.4_qat 00:06:04.525 element at address: 0x20000035a780 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:01.5_qat 00:06:04.525 element at address: 0x200000357200 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:01.6_qat 00:06:04.525 element at address: 0x200000353c80 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:01.7_qat 00:06:04.525 element at address: 0x200000350700 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:02.0_qat 00:06:04.525 element at address: 0x20000034d180 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:02.1_qat 00:06:04.525 element at address: 0x200000349c00 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:02.2_qat 00:06:04.525 element at address: 0x200000346680 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:02.3_qat 00:06:04.525 element at address: 0x200000343100 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:02.4_qat 00:06:04.525 element at address: 0x20000033fb80 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:02.5_qat 00:06:04.525 element at address: 0x20000033c600 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:02.6_qat 00:06:04.525 element at address: 0x200000339080 with size: 0.000427 MiB 00:06:04.525 associated memzone info: size: 0.000305 MiB name: 0000:86:02.7_qat 00:06:04.525 element at address: 0x2000003d6900 with size: 0.000305 MiB 00:06:04.525 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:04.525 element at address: 0x200000235fc0 with size: 0.000305 MiB 00:06:04.525 associated memzone info: size: 0.000183 MiB name: 
MP_msgpool_2303497 00:06:04.525 element at address: 0x200000210ac0 with size: 0.000305 MiB 00:06:04.525 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2303497 00:06:04.525 element at address: 0x200027e6d240 with size: 0.000305 MiB 00:06:04.525 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:04.525 element at address: 0x2000003d6240 with size: 0.000183 MiB 00:06:04.525 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:06:04.525 10:21:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:04.525 10:21:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2303497 00:06:04.525 10:21:08 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2303497 ']' 00:06:04.525 10:21:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2303497 00:06:04.525 10:21:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:04.525 10:21:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.525 10:21:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2303497 00:06:04.525 10:21:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.525 10:21:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.525 10:21:08 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2303497' 00:06:04.525 killing process with pid 2303497 00:06:04.525 10:21:08 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2303497 00:06:04.525 10:21:08 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2303497 00:06:05.089 00:06:05.089 real 0m1.757s 00:06:05.089 user 0m1.953s 00:06:05.089 sys 0m0.486s 00:06:05.089 10:21:08 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.089 10:21:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # 
set +x 00:06:05.089 ************************************ 00:06:05.089 END TEST dpdk_mem_utility 00:06:05.089 ************************************ 00:06:05.089 10:21:08 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:05.089 10:21:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.089 10:21:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.089 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:06:05.089 ************************************ 00:06:05.089 START TEST event 00:06:05.089 ************************************ 00:06:05.089 10:21:08 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:05.089 * Looking for test storage... 00:06:05.089 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event 00:06:05.089 10:21:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:05.089 10:21:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:05.089 10:21:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:05.089 10:21:08 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:05.089 10:21:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.089 10:21:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.089 ************************************ 00:06:05.089 START TEST event_perf 00:06:05.089 ************************************ 00:06:05.089 10:21:08 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:05.089 Running I/O for 1 seconds...[2024-07-25 10:21:08.710441] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:05.089 [2024-07-25 10:21:08.710510] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304121 ] 00:06:05.089 [2024-07-25 10:21:08.795952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.347 [2024-07-25 10:21:08.921127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.347 [2024-07-25 10:21:08.921178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.347 [2024-07-25 10:21:08.921235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.347 [2024-07-25 10:21:08.921238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.716 Running I/O for 1 seconds... 00:06:06.716 lcore 0: 235168 00:06:06.716 lcore 1: 235166 00:06:06.716 lcore 2: 235166 00:06:06.716 lcore 3: 235166 00:06:06.716 done. 
00:06:06.716 00:06:06.716 real 0m1.355s 00:06:06.716 user 0m4.238s 00:06:06.716 sys 0m0.110s 00:06:06.716 10:21:10 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.716 10:21:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.716 ************************************ 00:06:06.716 END TEST event_perf 00:06:06.716 ************************************ 00:06:06.716 10:21:10 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:06.716 10:21:10 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:06.716 10:21:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.716 10:21:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.716 ************************************ 00:06:06.716 START TEST event_reactor 00:06:06.716 ************************************ 00:06:06.716 10:21:10 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:06.716 [2024-07-25 10:21:10.104698] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:06.716 [2024-07-25 10:21:10.104748] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304482 ] 00:06:06.716 [2024-07-25 10:21:10.184784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.716 [2024-07-25 10:21:10.306705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.088 test_start 00:06:08.088 oneshot 00:06:08.088 tick 100 00:06:08.088 tick 100 00:06:08.088 tick 250 00:06:08.088 tick 100 00:06:08.088 tick 100 00:06:08.088 tick 100 00:06:08.088 tick 250 00:06:08.088 tick 500 00:06:08.088 tick 100 00:06:08.088 tick 100 00:06:08.088 tick 250 00:06:08.088 tick 100 00:06:08.088 tick 100 00:06:08.088 test_end 00:06:08.088 00:06:08.088 real 0m1.342s 00:06:08.088 user 0m1.241s 00:06:08.088 sys 0m0.096s 00:06:08.088 10:21:11 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.088 10:21:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:08.088 ************************************ 00:06:08.088 END TEST event_reactor 00:06:08.088 ************************************ 00:06:08.088 10:21:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:08.088 10:21:11 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:08.088 10:21:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.088 10:21:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.088 ************************************ 00:06:08.088 START TEST event_reactor_perf 00:06:08.088 ************************************ 00:06:08.088 10:21:11 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 
00:06:08.089 [2024-07-25 10:21:11.496659] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:08.089 [2024-07-25 10:21:11.496723] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304634 ] 00:06:08.089 [2024-07-25 10:21:11.578794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.089 [2024-07-25 10:21:11.701653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.506 test_start 00:06:09.506 test_end 00:06:09.506 Performance: 357864 events per second 00:06:09.506 00:06:09.506 real 0m1.348s 00:06:09.506 user 0m1.239s 00:06:09.506 sys 0m0.103s 00:06:09.506 10:21:12 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.506 10:21:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.506 ************************************ 00:06:09.506 END TEST event_reactor_perf 00:06:09.506 ************************************ 00:06:09.506 10:21:12 event -- event/event.sh@49 -- # uname -s 00:06:09.506 10:21:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:09.506 10:21:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:09.506 10:21:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.506 10:21:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.506 10:21:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.506 ************************************ 00:06:09.506 START TEST event_scheduler 00:06:09.506 ************************************ 00:06:09.506 10:21:12 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 
00:06:09.506 * Looking for test storage... 00:06:09.506 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler 00:06:09.506 10:21:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:09.506 10:21:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2304820 00:06:09.506 10:21:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:09.506 10:21:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.506 10:21:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2304820 00:06:09.506 10:21:12 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2304820 ']' 00:06:09.506 10:21:12 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.506 10:21:12 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.506 10:21:12 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.506 10:21:12 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.506 10:21:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.506 [2024-07-25 10:21:12.976034] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:09.506 [2024-07-25 10:21:12.976144] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304820 ] 00:06:09.506 [2024-07-25 10:21:13.054760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.506 [2024-07-25 10:21:13.165526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.506 [2024-07-25 10:21:13.165602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.506 [2024-07-25 10:21:13.165658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.506 [2024-07-25 10:21:13.165661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.437 10:21:13 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.437 10:21:13 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:10.437 10:21:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:10.437 10:21:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.437 10:21:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.437 [2024-07-25 10:21:13.912520] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:10.437 [2024-07-25 10:21:13.912551] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:10.437 [2024-07-25 10:21:13.912568] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:10.437 [2024-07-25 10:21:13.912578] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:10.437 [2024-07-25 10:21:13.912588] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:10.437 10:21:13 
event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.437 10:21:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:10.437 10:21:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.437 10:21:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.437 [2024-07-25 10:21:14.023940] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:10.437 10:21:14 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.437 10:21:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:10.437 10:21:14 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.437 10:21:14 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.437 10:21:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.437 ************************************ 00:06:10.437 START TEST scheduler_create_thread 00:06:10.437 ************************************ 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.437 2 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
active_pinned -m 0x2 -a 100 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.437 3 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.437 4 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.437 5 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.437 6 00:06:10.437 10:21:14 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.437 7 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.437 8 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.437 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.438 9 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:10.438 10:21:14 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.438 10 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.438 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.695 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.695 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:10.695 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.695 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.695 10:21:14 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.696 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:10.696 10:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:10.696 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.696 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.952 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.952 00:06:10.952 real 0m0.590s 00:06:10.952 user 0m0.013s 00:06:10.952 sys 0m0.000s 00:06:10.952 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.952 10:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.952 ************************************ 00:06:10.952 END TEST scheduler_create_thread 00:06:10.952 ************************************ 00:06:11.209 10:21:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:11.209 10:21:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2304820 00:06:11.209 10:21:14 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2304820 ']' 00:06:11.209 10:21:14 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2304820 00:06:11.209 10:21:14 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:11.209 10:21:14 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.209 10:21:14 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2304820 00:06:11.209 10:21:14 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:11.209 10:21:14 
event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:11.209 10:21:14 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2304820' 00:06:11.209 killing process with pid 2304820 00:06:11.209 10:21:14 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2304820 00:06:11.209 10:21:14 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2304820 00:06:11.466 [2024-07-25 10:21:15.124113] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:11.725 00:06:11.725 real 0m2.512s 00:06:11.725 user 0m5.248s 00:06:11.725 sys 0m0.363s 00:06:11.725 10:21:15 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.725 10:21:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.725 ************************************ 00:06:11.725 END TEST event_scheduler 00:06:11.725 ************************************ 00:06:11.725 10:21:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:11.725 10:21:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:11.725 10:21:15 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.725 10:21:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.725 10:21:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.983 ************************************ 00:06:11.983 START TEST app_repeat 00:06:11.983 ************************************ 00:06:11.983 10:21:15 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 
00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2305149 00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2305149' 00:06:11.983 Process app_repeat pid: 2305149 00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:11.983 spdk_app_start Round 0 00:06:11.983 10:21:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2305149 /var/tmp/spdk-nbd.sock 00:06:11.983 10:21:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2305149 ']' 00:06:11.983 10:21:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.983 10:21:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.983 10:21:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.983 10:21:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.983 10:21:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.983 [2024-07-25 10:21:15.471402] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:11.983 [2024-07-25 10:21:15.471474] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2305149 ] 00:06:11.983 [2024-07-25 10:21:15.557454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.983 [2024-07-25 10:21:15.670256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.983 [2024-07-25 10:21:15.670262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.241 10:21:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.241 10:21:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:12.241 10:21:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.498 Malloc0 00:06:12.498 10:21:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.756 Malloc1 00:06:12.756 10:21:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.756 10:21:16 
event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.756 10:21:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.014 /dev/nbd0 00:06:13.014 10:21:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.014 10:21:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.014 10:21:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:13.014 10:21:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:13.014 10:21:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:13.014 10:21:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:13.015 10:21:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:13.015 10:21:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:13.015 10:21:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:13.015 10:21:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:13.015 10:21:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:13.015 1+0 records in 00:06:13.015 1+0 records out 00:06:13.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155377 s, 26.4 MB/s 00:06:13.015 10:21:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:13.015 10:21:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:13.015 10:21:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:13.015 10:21:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:13.015 10:21:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:13.015 10:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.015 10:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.015 10:21:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.273 /dev/nbd1 00:06:13.273 10:21:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.273 10:21:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 
)) 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.273 1+0 records in 00:06:13.273 1+0 records out 00:06:13.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240913 s, 17.0 MB/s 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:13.273 10:21:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:13.273 10:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.273 10:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.273 10:21:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.273 10:21:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.273 10:21:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.530 10:21:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.530 { 00:06:13.530 "nbd_device": "/dev/nbd0", 00:06:13.530 "bdev_name": "Malloc0" 00:06:13.530 }, 00:06:13.530 { 00:06:13.530 "nbd_device": "/dev/nbd1", 00:06:13.530 "bdev_name": "Malloc1" 00:06:13.530 } 00:06:13.531 ]' 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.531 { 00:06:13.531 "nbd_device": "/dev/nbd0", 00:06:13.531 "bdev_name": "Malloc0" 00:06:13.531 }, 00:06:13.531 { 00:06:13.531 "nbd_device": 
"/dev/nbd1", 00:06:13.531 "bdev_name": "Malloc1" 00:06:13.531 } 00:06:13.531 ]' 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.531 /dev/nbd1' 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.531 /dev/nbd1' 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.531 256+0 records in 00:06:13.531 256+0 records out 00:06:13.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377013 s, 278 MB/s 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.531 256+0 records in 00:06:13.531 256+0 records out 00:06:13.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027549 s, 38.1 MB/s 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.531 10:21:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.789 256+0 records in 00:06:13.789 256+0 records out 00:06:13.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264948 s, 39.6 MB/s 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # 
rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.789 10:21:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.047 10:21:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.047 10:21:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.047 10:21:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.047 10:21:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.047 10:21:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.047 10:21:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.047 10:21:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.047 10:21:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.047 10:21:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.047 10:21:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.304 10:21:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.304 10:21:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 
00:06:14.304 10:21:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.304 10:21:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.304 10:21:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.304 10:21:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.304 10:21:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.304 10:21:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.304 10:21:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.304 10:21:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.304 10:21:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.562 10:21:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.562 10:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.562 10:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.562 10:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.562 10:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.562 10:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.562 10:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.562 10:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.562 10:21:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.562 10:21:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.562 10:21:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.562 10:21:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.562 10:21:18 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.820 10:21:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.079 [2024-07-25 10:21:18.664635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.079 [2024-07-25 10:21:18.781801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.079 [2024-07-25 10:21:18.781801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.336 [2024-07-25 10:21:18.843788] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.336 [2024-07-25 10:21:18.843855] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.865 10:21:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.865 10:21:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:17.865 spdk_app_start Round 1 00:06:17.865 10:21:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2305149 /var/tmp/spdk-nbd.sock 00:06:17.865 10:21:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2305149 ']' 00:06:17.865 10:21:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.865 10:21:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.865 10:21:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:17.865 10:21:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.865 10:21:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.123 10:21:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.123 10:21:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:18.123 10:21:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.382 Malloc0 00:06:18.382 10:21:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.639 Malloc1 00:06:18.639 10:21:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@11 -- 
# local nbd_list 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.639 10:21:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.897 /dev/nbd0 00:06:18.897 10:21:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.897 10:21:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.897 10:21:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:18.897 10:21:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:18.897 10:21:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:18.897 10:21:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:18.897 10:21:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:18.897 10:21:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:18.897 10:21:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:18.897 10:21:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:18.897 10:21:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.897 1+0 records in 00:06:18.897 1+0 records out 00:06:18.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180919 s, 22.6 MB/s 00:06:18.897 10:21:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:18.897 10:21:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:18.897 10:21:22 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:18.897 10:21:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:18.897 10:21:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:18.897 10:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.897 10:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.897 10:21:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.154 /dev/nbd1 00:06:19.154 10:21:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.154 10:21:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.154 1+0 records in 00:06:19.154 1+0 records out 00:06:19.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205939 s, 19.9 MB/s 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.154 10:21:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:19.154 10:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.154 10:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.154 10:21:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.154 10:21:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.154 10:21:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.412 10:21:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.412 { 00:06:19.412 "nbd_device": "/dev/nbd0", 00:06:19.412 "bdev_name": "Malloc0" 00:06:19.412 }, 00:06:19.412 { 00:06:19.412 "nbd_device": "/dev/nbd1", 00:06:19.412 "bdev_name": "Malloc1" 00:06:19.412 } 00:06:19.412 ]' 00:06:19.412 10:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.412 { 00:06:19.412 "nbd_device": "/dev/nbd0", 00:06:19.412 "bdev_name": "Malloc0" 00:06:19.412 }, 00:06:19.412 { 00:06:19.412 "nbd_device": "/dev/nbd1", 00:06:19.412 "bdev_name": "Malloc1" 00:06:19.412 } 00:06:19.412 ]' 00:06:19.412 10:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.412 /dev/nbd1' 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.412 /dev/nbd1' 00:06:19.412 
10:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.412 256+0 records in 00:06:19.412 256+0 records out 00:06:19.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503679 s, 208 MB/s 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.412 256+0 records in 00:06:19.412 256+0 records out 00:06:19.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240561 s, 43.6 MB/s 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.412 256+0 records in 00:06:19.412 256+0 records out 00:06:19.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286475 s, 36.6 MB/s 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.412 10:21:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.413 10:21:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.413 10:21:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:06:19.413 10:21:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.413 10:21:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.413 10:21:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.413 10:21:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.670 10:21:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.670 10:21:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.670 10:21:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.670 10:21:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.670 10:21:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.670 10:21:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.670 10:21:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.670 10:21:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.670 10:21:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.670 10:21:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.926 10:21:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.926 10:21:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.926 10:21:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.926 10:21:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.926 10:21:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.926 10:21:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.926 10:21:23 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:06:19.926 10:21:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.926 10:21:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.926 10:21:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.183 10:21:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.183 10:21:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.183 10:21:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.183 10:21:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.440 10:21:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.440 10:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.440 10:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.440 10:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.440 10:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.440 10:21:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.440 10:21:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.440 10:21:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.440 10:21:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.440 10:21:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.699 10:21:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:20.956 [2024-07-25 10:21:24.483219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.956 [2024-07-25 10:21:24.606302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.956 [2024-07-25 10:21:24.606307] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:06:21.214 [2024-07-25 10:21:24.665671] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.214 [2024-07-25 10:21:24.665729] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:23.739 10:21:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.739 10:21:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:23.739 spdk_app_start Round 2 00:06:23.739 10:21:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2305149 /var/tmp/spdk-nbd.sock 00:06:23.739 10:21:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2305149 ']' 00:06:23.739 10:21:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.739 10:21:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.739 10:21:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:23.739 10:21:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.739 10:21:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.739 10:21:27 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.739 10:21:27 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:23.739 10:21:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.997 Malloc0 00:06:23.997 10:21:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.254 Malloc1 00:06:24.512 10:21:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@11 -- 
# local nbd_list 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.512 10:21:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.512 /dev/nbd0 00:06:24.512 10:21:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.512 10:21:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.512 10:21:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:24.512 10:21:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:24.512 10:21:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:24.512 10:21:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:24.512 10:21:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:24.770 10:21:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:24.770 10:21:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:24.770 10:21:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:24.770 10:21:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.770 1+0 records in 00:06:24.770 1+0 records out 00:06:24.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219085 s, 18.7 MB/s 00:06:24.770 10:21:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:24.770 10:21:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:24.770 10:21:28 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:24.770 10:21:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:24.770 10:21:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:24.770 10:21:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.770 10:21:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.770 10:21:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.770 /dev/nbd1 00:06:25.027 10:21:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.027 10:21:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.027 1+0 records in 00:06:25.027 1+0 records out 00:06:25.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204792 s, 20.0 MB/s 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:25.027 10:21:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:25.027 10:21:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.027 10:21:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.027 10:21:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.027 10:21:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.027 10:21:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.285 { 00:06:25.285 "nbd_device": "/dev/nbd0", 00:06:25.285 "bdev_name": "Malloc0" 00:06:25.285 }, 00:06:25.285 { 00:06:25.285 "nbd_device": "/dev/nbd1", 00:06:25.285 "bdev_name": "Malloc1" 00:06:25.285 } 00:06:25.285 ]' 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.285 { 00:06:25.285 "nbd_device": "/dev/nbd0", 00:06:25.285 "bdev_name": "Malloc0" 00:06:25.285 }, 00:06:25.285 { 00:06:25.285 "nbd_device": "/dev/nbd1", 00:06:25.285 "bdev_name": "Malloc1" 00:06:25.285 } 00:06:25.285 ]' 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.285 /dev/nbd1' 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.285 /dev/nbd1' 00:06:25.285 
10:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.285 256+0 records in 00:06:25.285 256+0 records out 00:06:25.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503768 s, 208 MB/s 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.285 256+0 records in 00:06:25.285 256+0 records out 00:06:25.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273167 s, 38.4 MB/s 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.285 256+0 records in 00:06:25.285 256+0 records out 00:06:25.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267766 s, 39.2 MB/s 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.285 10:21:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.286 10:21:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.543 10:21:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.543 10:21:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.543 10:21:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.543 10:21:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.543 10:21:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.543 10:21:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.543 10:21:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.543 10:21:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.543 10:21:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.543 10:21:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.801 10:21:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.801 10:21:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.801 10:21:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.801 10:21:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.801 10:21:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.801 10:21:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.801 10:21:29 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:06:25.801 10:21:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.801 10:21:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.801 10:21:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.801 10:21:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.059 10:21:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.059 10:21:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.059 10:21:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.059 10:21:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.059 10:21:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.059 10:21:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.059 10:21:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.059 10:21:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.059 10:21:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.059 10:21:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.059 10:21:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.059 10:21:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.059 10:21:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.317 10:21:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.575 [2024-07-25 10:21:30.282313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.832 [2024-07-25 10:21:30.398841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.832 [2024-07-25 10:21:30.398841] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:26.832 [2024-07-25 10:21:30.465089] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.832 [2024-07-25 10:21:30.465216] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.392 10:21:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2305149 /var/tmp/spdk-nbd.sock 00:06:29.392 10:21:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2305149 ']' 00:06:29.392 10:21:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.392 10:21:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.392 10:21:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:29.392 10:21:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.392 10:21:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.650 10:21:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.650 10:21:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:29.650 10:21:33 event.app_repeat -- event/event.sh@39 -- # killprocess 2305149 00:06:29.650 10:21:33 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2305149 ']' 00:06:29.650 10:21:33 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2305149 00:06:29.650 10:21:33 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:29.650 10:21:33 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.650 10:21:33 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2305149 00:06:29.650 10:21:33 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.650 10:21:33 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.650 10:21:33 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2305149' 00:06:29.650 killing process with pid 2305149 00:06:29.650 10:21:33 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2305149 00:06:29.650 10:21:33 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2305149 00:06:29.916 spdk_app_start is called in Round 0. 00:06:29.916 Shutdown signal received, stop current app iteration 00:06:29.916 Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 reinitialization... 00:06:29.916 spdk_app_start is called in Round 1. 00:06:29.916 Shutdown signal received, stop current app iteration 00:06:29.916 Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 reinitialization... 00:06:29.916 spdk_app_start is called in Round 2. 
00:06:29.916 Shutdown signal received, stop current app iteration 00:06:29.916 Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 reinitialization... 00:06:29.916 spdk_app_start is called in Round 3. 00:06:29.916 Shutdown signal received, stop current app iteration 00:06:29.916 10:21:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:29.916 10:21:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:29.916 00:06:29.916 real 0m18.086s 00:06:29.916 user 0m38.980s 00:06:29.916 sys 0m3.303s 00:06:29.916 10:21:33 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.916 10:21:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.916 ************************************ 00:06:29.916 END TEST app_repeat 00:06:29.916 ************************************ 00:06:29.916 10:21:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:29.916 00:06:29.916 real 0m24.937s 00:06:29.916 user 0m51.063s 00:06:29.916 sys 0m4.170s 00:06:29.916 10:21:33 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.916 10:21:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.916 ************************************ 00:06:29.916 END TEST event 00:06:29.916 ************************************ 00:06:29.916 10:21:33 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:29.916 10:21:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.916 10:21:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.916 10:21:33 -- common/autotest_common.sh@10 -- # set +x 00:06:29.916 ************************************ 00:06:29.916 START TEST thread 00:06:29.916 ************************************ 00:06:29.916 10:21:33 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:30.217 * Looking for test storage... 
00:06:30.217 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:06:30.217 10:21:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.217 10:21:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:30.217 10:21:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.217 10:21:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.217 ************************************ 00:06:30.217 START TEST thread_poller_perf 00:06:30.217 ************************************ 00:06:30.217 10:21:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.217 [2024-07-25 10:21:33.682235] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:30.217 [2024-07-25 10:21:33.682301] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2307644 ] 00:06:30.217 [2024-07-25 10:21:33.758130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.217 [2024-07-25 10:21:33.874122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.217 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:31.591 ====================================== 00:06:31.591 busy:2708028297 (cyc) 00:06:31.591 total_run_count: 295000 00:06:31.591 tsc_hz: 2700000000 (cyc) 00:06:31.591 ====================================== 00:06:31.591 poller_cost: 9179 (cyc), 3399 (nsec) 00:06:31.591 00:06:31.591 real 0m1.336s 00:06:31.591 user 0m1.237s 00:06:31.591 sys 0m0.094s 00:06:31.591 10:21:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.591 10:21:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.591 ************************************ 00:06:31.591 END TEST thread_poller_perf 00:06:31.591 ************************************ 00:06:31.591 10:21:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:31.591 10:21:35 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:31.591 10:21:35 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.591 10:21:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.591 ************************************ 00:06:31.591 START TEST thread_poller_perf 00:06:31.591 ************************************ 00:06:31.591 10:21:35 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:31.591 [2024-07-25 10:21:35.070972] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:31.591 [2024-07-25 10:21:35.071035] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2307806 ] 00:06:31.591 [2024-07-25 10:21:35.154847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.591 [2024-07-25 10:21:35.275689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.591 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:32.964 ====================================== 00:06:32.964 busy:2703077745 (cyc) 00:06:32.964 total_run_count: 3861000 00:06:32.964 tsc_hz: 2700000000 (cyc) 00:06:32.964 ====================================== 00:06:32.964 poller_cost: 700 (cyc), 259 (nsec) 00:06:32.964 00:06:32.964 real 0m1.352s 00:06:32.964 user 0m1.241s 00:06:32.964 sys 0m0.105s 00:06:32.964 10:21:36 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.964 10:21:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.964 ************************************ 00:06:32.964 END TEST thread_poller_perf 00:06:32.964 ************************************ 00:06:32.964 10:21:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:32.964 00:06:32.964 real 0m2.835s 00:06:32.964 user 0m2.540s 00:06:32.964 sys 0m0.295s 00:06:32.964 10:21:36 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.964 10:21:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.964 ************************************ 00:06:32.964 END TEST thread 00:06:32.964 ************************************ 00:06:32.964 10:21:36 -- spdk/autotest.sh@184 -- # [[ 1 -eq 1 ]] 00:06:32.964 10:21:36 -- spdk/autotest.sh@185 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:06:32.964 10:21:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:06:32.964 10:21:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.965 10:21:36 -- common/autotest_common.sh@10 -- # set +x 00:06:32.965 ************************************ 00:06:32.965 START TEST accel 00:06:32.965 ************************************ 00:06:32.965 10:21:36 accel -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:06:32.965 * Looking for test storage... 00:06:32.965 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:06:32.965 10:21:36 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:32.965 10:21:36 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:32.965 10:21:36 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:32.965 10:21:36 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2307999 00:06:32.965 10:21:36 accel -- accel/accel.sh@63 -- # waitforlisten 2307999 00:06:32.965 10:21:36 accel -- common/autotest_common.sh@831 -- # '[' -z 2307999 ']' 00:06:32.965 10:21:36 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:32.965 10:21:36 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:32.965 10:21:36 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.965 10:21:36 accel -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.965 10:21:36 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.965 10:21:36 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:32.965 10:21:36 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.965 10:21:36 accel -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.965 10:21:36 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.965 10:21:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.965 10:21:36 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.965 10:21:36 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.965 10:21:36 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:32.965 10:21:36 accel -- accel/accel.sh@41 -- # jq -r . 00:06:32.965 [2024-07-25 10:21:36.587947] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:32.965 [2024-07-25 10:21:36.588040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2307999 ] 00:06:33.223 [2024-07-25 10:21:36.674607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.223 [2024-07-25 10:21:36.792005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@864 -- # return 0 00:06:34.157 10:21:37 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:34.157 10:21:37 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:34.157 10:21:37 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:34.157 10:21:37 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:34.157 10:21:37 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:34.157 10:21:37 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.157 10:21:37 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # 
IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.157 10:21:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.157 10:21:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.157 10:21:37 accel -- accel/accel.sh@75 -- # killprocess 2307999 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@950 -- # '[' -z 2307999 ']' 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@954 -- # kill -0 2307999 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@955 -- # uname 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2307999 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2307999' 00:06:34.157 killing process with pid 2307999 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@969 -- # kill 2307999 00:06:34.157 10:21:37 accel -- common/autotest_common.sh@974 -- # wait 2307999 00:06:34.415 10:21:38 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:34.415 10:21:38 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:34.415 10:21:38 accel -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:06:34.415 10:21:38 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.415 10:21:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.674 10:21:38 accel.accel_help -- common/autotest_common.sh@1125 -- # accel_perf -h 00:06:34.674 10:21:38 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:34.674 10:21:38 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:34.674 10:21:38 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.674 10:21:38 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.674 10:21:38 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.674 10:21:38 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.674 10:21:38 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.674 10:21:38 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:34.674 10:21:38 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:34.674 10:21:38 accel.accel_help -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.674 10:21:38 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:34.674 10:21:38 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:34.674 10:21:38 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:34.674 10:21:38 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.674 10:21:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.674 ************************************ 00:06:34.674 START TEST accel_missing_filename 00:06:34.674 ************************************ 00:06:34.674 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress 00:06:34.674 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # local es=0 00:06:34.674 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:34.674 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:34.674 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.674 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:34.674 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.674 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:34.674 10:21:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:34.674 10:21:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:34.674 10:21:38 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.674 10:21:38 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.674 10:21:38 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.674 10:21:38 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.674 10:21:38 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.674 10:21:38 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:34.674 10:21:38 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:34.674 [2024-07-25 10:21:38.226758] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:34.674 [2024-07-25 10:21:38.226822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308292 ] 00:06:34.674 [2024-07-25 10:21:38.311424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.932 [2024-07-25 10:21:38.432878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.932 [2024-07-25 10:21:38.509450] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.932 [2024-07-25 10:21:38.597262] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:35.190 A filename is required. 
00:06:35.190 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # es=234 00:06:35.190 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.190 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=106 00:06:35.190 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@663 -- # case "$es" in 00:06:35.190 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@670 -- # es=1 00:06:35.190 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.190 00:06:35.190 real 0m0.522s 00:06:35.190 user 0m0.369s 00:06:35.190 sys 0m0.176s 00:06:35.190 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.190 10:21:38 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:35.190 ************************************ 00:06:35.190 END TEST accel_missing_filename 00:06:35.190 ************************************ 00:06:35.190 10:21:38 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:35.190 10:21:38 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:06:35.190 10:21:38 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.190 10:21:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.190 ************************************ 00:06:35.190 START TEST accel_compress_verify 00:06:35.190 ************************************ 00:06:35.190 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:35.190 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # local es=0 00:06:35.190 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # valid_exec_arg 
accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:35.190 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:35.190 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.190 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:35.190 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.190 10:21:38 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:35.191 10:21:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:35.191 10:21:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:35.191 10:21:38 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.191 10:21:38 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.191 10:21:38 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.191 10:21:38 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.191 10:21:38 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.191 10:21:38 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:35.191 10:21:38 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:35.191 [2024-07-25 10:21:38.795113] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:35.191 [2024-07-25 10:21:38.795191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308322 ] 00:06:35.191 [2024-07-25 10:21:38.876773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.451 [2024-07-25 10:21:39.000137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.451 [2024-07-25 10:21:39.073423] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.451 [2024-07-25 10:21:39.153832] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:35.710 00:06:35.710 Compression does not support the verify option, aborting. 00:06:35.710 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # es=161 00:06:35.710 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.710 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=33 00:06:35.710 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@663 -- # case "$es" in 00:06:35.710 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@670 -- # es=1 00:06:35.710 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.710 00:06:35.710 real 0m0.510s 00:06:35.710 user 0m0.367s 00:06:35.710 sys 0m0.169s 00:06:35.710 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.710 10:21:39 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:35.710 ************************************ 00:06:35.710 END TEST accel_compress_verify 00:06:35.710 ************************************ 00:06:35.710 10:21:39 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:35.710 10:21:39 accel -- 
common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:35.710 10:21:39 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.710 10:21:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.710 ************************************ 00:06:35.710 START TEST accel_wrong_workload 00:06:35.710 ************************************ 00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w foobar 00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # local es=0 00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:35.710 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:35.710 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:35.710 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.710 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.710 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.710 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.710 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.710 10:21:39 
accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:35.710 10:21:39 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:35.710 Unsupported workload type: foobar 00:06:35.710 [2024-07-25 10:21:39.350832] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:35.710 accel_perf options: 00:06:35.710 [-h help message] 00:06:35.710 [-q queue depth per core] 00:06:35.710 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:35.710 [-T number of threads per core 00:06:35.710 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:35.710 [-t time in seconds] 00:06:35.710 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:35.710 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:35.710 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:35.710 [-l for compress/decompress workloads, name of uncompressed input file 00:06:35.710 [-S for crc32c workload, use this seed value (default 0) 00:06:35.710 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:35.710 [-f for fill workload, use this BYTE value (default 255) 00:06:35.710 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:35.710 [-y verify result if this switch is on] 00:06:35.710 [-a tasks to allocate per core (default: same value as -q)] 00:06:35.710 Can be used to spread operations across a wider range of memory. 
00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # es=1 00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.710 00:06:35.710 real 0m0.030s 00:06:35.710 user 0m0.019s 00:06:35.710 sys 0m0.011s 00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.710 10:21:39 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:35.710 ************************************ 00:06:35.710 END TEST accel_wrong_workload 00:06:35.710 ************************************ 00:06:35.710 Error: writing output failed: Broken pipe 00:06:35.710 10:21:39 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:35.710 10:21:39 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:06:35.710 10:21:39 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.710 10:21:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.710 ************************************ 00:06:35.710 START TEST accel_negative_buffers 00:06:35.710 ************************************ 00:06:35.710 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:35.710 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # local es=0 00:06:35.710 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:35.710 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:35.710 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.710 10:21:39 
accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:35.710 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.710 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:35.710 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:35.710 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:35.710 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.710 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.710 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.710 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.710 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.710 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:35.710 10:21:39 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:35.710 -x option must be non-negative. 00:06:35.710 [2024-07-25 10:21:39.417473] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:35.968 accel_perf options: 00:06:35.968 [-h help message] 00:06:35.968 [-q queue depth per core] 00:06:35.968 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:35.968 [-T number of threads per core 00:06:35.968 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:35.968 [-t time in seconds] 00:06:35.968 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:35.968 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:35.968 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:35.968 [-l for compress/decompress workloads, name of uncompressed input file 00:06:35.968 [-S for crc32c workload, use this seed value (default 0) 00:06:35.968 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:35.968 [-f for fill workload, use this BYTE value (default 255) 00:06:35.968 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:35.968 [-y verify result if this switch is on] 00:06:35.968 [-a tasks to allocate per core (default: same value as -q)] 00:06:35.968 Can be used to spread operations across a wider range of memory. 
00:06:35.968 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # es=1 00:06:35.968 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.968 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.968 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.968 00:06:35.968 real 0m0.029s 00:06:35.968 user 0m0.015s 00:06:35.968 sys 0m0.014s 00:06:35.968 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.968 10:21:39 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:35.968 ************************************ 00:06:35.968 END TEST accel_negative_buffers 00:06:35.968 ************************************ 00:06:35.968 Error: writing output failed: Broken pipe 00:06:35.968 10:21:39 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:35.968 10:21:39 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:06:35.968 10:21:39 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.968 10:21:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.968 ************************************ 00:06:35.968 START TEST accel_crc32c 00:06:35.968 ************************************ 00:06:35.968 10:21:39 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:35.968 10:21:39 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:35.968 [2024-07-25 10:21:39.483742] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:35.968 [2024-07-25 10:21:39.483793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308512 ] 00:06:35.968 [2024-07-25 10:21:39.567790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.226 [2024-07-25 10:21:39.691248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read 
-r var val 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.226 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.227 10:21:39 
accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.227 10:21:39 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val=Yes 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.227 10:21:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.601 10:21:40 
accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:37.601 10:21:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.601 00:06:37.601 real 0m1.524s 00:06:37.601 user 0m0.010s 00:06:37.601 sys 0m0.002s 00:06:37.601 10:21:40 accel.accel_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.601 10:21:40 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:37.601 ************************************ 00:06:37.601 END TEST accel_crc32c 00:06:37.601 ************************************ 00:06:37.601 10:21:41 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:37.601 10:21:41 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:06:37.601 10:21:41 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.601 10:21:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.601 ************************************ 00:06:37.601 START TEST accel_crc32c_C2 00:06:37.601 ************************************ 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- 
common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.601 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:37.601 [2024-07-25 10:21:41.054764] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:37.602 [2024-07-25 10:21:41.054823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308671 ] 00:06:37.602 [2024-07-25 10:21:41.138304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.602 [2024-07-25 10:21:41.260964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.860 10:21:41 accel.accel_crc32c_C2 
-- accel/accel.sh@20 -- # val=32 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.860 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.861 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.861 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:37.861 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.861 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.861 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.861 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.861 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.861 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.861 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.861 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.861 
10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.861 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.861 10:21:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.235 00:06:39.235 real 0m1.529s 00:06:39.235 user 0m0.010s 00:06:39.235 sys 0m0.002s 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.235 10:21:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:39.235 ************************************ 00:06:39.235 END TEST accel_crc32c_C2 00:06:39.235 ************************************ 00:06:39.235 10:21:42 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:39.235 10:21:42 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:39.235 10:21:42 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.235 10:21:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.235 ************************************ 00:06:39.235 START TEST accel_copy 00:06:39.235 ************************************ 00:06:39.235 10:21:42 accel.accel_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy -y 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w copy -y 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:39.235 [2024-07-25 10:21:42.626990] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:39.235 [2024-07-25 10:21:42.627049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308911 ] 00:06:39.235 [2024-07-25 10:21:42.708573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.235 [2024-07-25 10:21:42.830044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:39.235 10:21:42 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.235 10:21:42 
accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:39.235 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.236 10:21:42 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.236 10:21:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.610 10:21:44 accel.accel_copy -- 
accel/accel.sh@21 -- # case "$var" in 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:40.610 10:21:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.610 00:06:40.610 real 0m1.501s 00:06:40.610 user 0m0.009s 00:06:40.610 sys 0m0.002s 00:06:40.610 10:21:44 accel.accel_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.610 10:21:44 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:40.610 ************************************ 00:06:40.610 END TEST accel_copy 00:06:40.610 ************************************ 00:06:40.610 10:21:44 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.610 10:21:44 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:40.610 10:21:44 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.610 10:21:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.610 ************************************ 00:06:40.610 START TEST accel_fill 00:06:40.610 ************************************ 00:06:40.610 10:21:44 accel.accel_fill -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:40.610 10:21:44 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:40.610 [2024-07-25 10:21:44.171704] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:40.610 [2024-07-25 10:21:44.171764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309106 ] 00:06:40.610 [2024-07-25 10:21:44.253141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.868 [2024-07-25 10:21:44.375592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 
00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.868 
10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.868 10:21:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.238 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.239 10:21:45 
accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.239 10:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.239 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.239 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.239 10:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.239 10:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.239 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.239 10:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.239 10:21:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.239 10:21:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:42.239 10:21:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.239 00:06:42.239 real 0m1.525s 00:06:42.239 user 0m0.011s 00:06:42.239 sys 0m0.002s 00:06:42.239 10:21:45 accel.accel_fill -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.239 10:21:45 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:42.239 ************************************ 00:06:42.239 END TEST accel_fill 00:06:42.239 ************************************ 00:06:42.239 10:21:45 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:42.239 10:21:45 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:42.239 10:21:45 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.239 10:21:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.239 ************************************ 00:06:42.239 START TEST accel_copy_crc32c 00:06:42.239 ************************************ 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- 
accel/accel.sh@17 -- # local accel_module 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:42.239 10:21:45 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:42.239 [2024-07-25 10:21:45.739678] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:42.239 [2024-07-25 10:21:45.739735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309259 ] 00:06:42.239 [2024-07-25 10:21:45.820189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.239 [2024-07-25 10:21:45.943269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.496 10:21:46 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val=software 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.497 10:21:46 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.497 10:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:21:47 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.868 00:06:43.868 real 0m1.510s 00:06:43.868 user 0m0.010s 00:06:43.868 sys 0m0.003s 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.868 10:21:47 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:43.868 ************************************ 00:06:43.868 END TEST accel_copy_crc32c 00:06:43.868 ************************************ 00:06:43.868 10:21:47 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:43.868 10:21:47 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:06:43.868 10:21:47 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.868 10:21:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.868 ************************************ 00:06:43.868 START TEST accel_copy_crc32c_C2 00:06:43.868 ************************************ 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- 
common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:43.868 [2024-07-25 10:21:47.293979] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:43.868 [2024-07-25 10:21:47.294038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309531 ] 00:06:43.868 [2024-07-25 10:21:47.376689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.868 [2024-07-25 10:21:47.496365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.868 10:21:47 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:43.868 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.869 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.126 10:21:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.498 
10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.498 00:06:45.498 real 0m1.508s 00:06:45.498 user 0m0.013s 00:06:45.498 sys 0m0.001s 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.498 10:21:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:45.498 ************************************ 00:06:45.498 END TEST accel_copy_crc32c_C2 00:06:45.498 ************************************ 00:06:45.498 10:21:48 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:45.498 10:21:48 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:45.498 10:21:48 accel -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:06:45.498 10:21:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.498 ************************************ 00:06:45.498 START TEST accel_dualcast 00:06:45.498 ************************************ 00:06:45.498 10:21:48 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dualcast -y 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:45.498 10:21:48 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:45.498 [2024-07-25 10:21:48.844320] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:45.498 [2024-07-25 10:21:48.844394] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309694 ] 00:06:45.498 [2024-07-25 10:21:48.925594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.498 [2024-07-25 10:21:49.048192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.498 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.499 10:21:49 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 
00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.499 10:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:46.870 10:21:50 accel.accel_dualcast -- accel/accel.sh@27 
-- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.870 00:06:46.870 real 0m1.519s 00:06:46.870 user 0m0.009s 00:06:46.870 sys 0m0.003s 00:06:46.870 10:21:50 accel.accel_dualcast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.870 10:21:50 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:46.870 ************************************ 00:06:46.870 END TEST accel_dualcast 00:06:46.870 ************************************ 00:06:46.870 10:21:50 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:46.870 10:21:50 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:46.870 10:21:50 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.870 10:21:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.870 ************************************ 00:06:46.870 START TEST accel_compare 00:06:46.870 ************************************ 00:06:46.870 10:21:50 accel.accel_compare -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compare -y 00:06:46.870 10:21:50 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:46.870 10:21:50 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:46.870 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.870 10:21:50 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:46.870 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.870 10:21:50 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:46.870 10:21:50 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:46.870 10:21:50 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.870 10:21:50 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.870 10:21:50 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.870 
10:21:50 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.870 10:21:50 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.870 10:21:50 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:46.870 10:21:50 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:46.870 [2024-07-25 10:21:50.415983] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:46.870 [2024-07-25 10:21:50.416043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309851 ] 00:06:46.870 [2024-07-25 10:21:50.499250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.128 [2024-07-25 10:21:50.620038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.128 10:21:50 accel.accel_compare -- 
accel/accel.sh@21 -- # case "$var" in 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.128 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.129 10:21:50 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.129 10:21:50 accel.accel_compare -- 
accel/accel.sh@21 -- # case "$var" in 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.129 10:21:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 
00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:48.520 10:21:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.520 00:06:48.520 real 0m1.523s 00:06:48.520 user 0m0.009s 00:06:48.520 sys 0m0.003s 00:06:48.520 10:21:51 accel.accel_compare -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.520 10:21:51 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:48.520 ************************************ 00:06:48.521 END TEST accel_compare 00:06:48.521 ************************************ 00:06:48.521 10:21:51 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:48.521 10:21:51 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:48.521 10:21:51 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.521 10:21:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.521 ************************************ 00:06:48.521 START TEST accel_xor 00:06:48.521 ************************************ 00:06:48.521 10:21:51 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y 00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 
00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:48.521 10:21:51 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:48.521 [2024-07-25 10:21:51.983818] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:48.521 [2024-07-25 10:21:51.983889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310123 ] 00:06:48.521 [2024-07-25 10:21:52.059135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.521 [2024-07-25 10:21:52.176174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.779 
10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:48.779 10:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.151 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:50.151 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.151 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.151 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.151 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:50.151 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.151 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.151 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:50.152
00:06:50.152 real 0m1.514s
00:06:50.152 user 0m0.012s
00:06:50.152 sys 0m0.000s
00:06:50.152 10:21:53 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:50.152 10:21:53 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:06:50.152 ************************************
00:06:50.152 END TEST accel_xor
00:06:50.152 ************************************
00:06:50.152 10:21:53 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:50.152 10:21:53 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:06:50.152 10:21:53 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:50.152 10:21:53 accel -- common/autotest_common.sh@10 -- # set +x
00:06:50.152 ************************************
00:06:50.152 START TEST accel_xor
00:06:50.152 ************************************
00:06:50.152 10:21:53 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y -x 3
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
[2024-07-25 10:21:53.542915] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization...
[2024-07-25 10:21:53.542975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310288 ]
[2024-07-25 10:21:53.627918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 10:21:53.751007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:50.152 10:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:51.524 10:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:51.524 10:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:51.524 10:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:51.524 10:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:51.524 10:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:51.525 10:21:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:51.525
00:06:51.525 real 0m1.528s
00:06:51.525 user 0m1.354s
00:06:51.525 sys 0m0.166s
00:06:51.525 10:21:55 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:51.525 10:21:55 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:06:51.525 ************************************
00:06:51.525 END TEST accel_xor
00:06:51.525 ************************************
00:06:51.525 10:21:55 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:06:51.525 10:21:55 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:06:51.525 10:21:55 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:51.525 10:21:55 accel -- common/autotest_common.sh@10 -- # set +x
00:06:51.525 ************************************
00:06:51.525 START TEST accel_dif_verify
00:06:51.525 ************************************
00:06:51.525 10:21:55 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_verify
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=,
00:06:51.525 10:21:55 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
[2024-07-25 10:21:55.120322] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization...
00:06:51.525 [2024-07-25 10:21:55.120382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310505 ]
[2024-07-25 10:21:55.203611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 10:21:55.327434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:51.783 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.783 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.783 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.783 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:51.783 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.783 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.783 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:51.784 10:21:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:06:53.157 10:21:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:53.157
00:06:53.157 real 0m1.524s
00:06:53.157 user 0m0.009s
00:06:53.157 sys 0m0.003s
00:06:53.157 10:21:56 accel.accel_dif_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:53.157 10:21:56 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:06:53.157 ************************************
00:06:53.157 END TEST accel_dif_verify
00:06:53.157 ************************************
00:06:53.157 10:21:56 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:06:53.157 10:21:56 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:06:53.157 10:21:56 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:53.157 10:21:56 accel -- common/autotest_common.sh@10 -- # set +x
00:06:53.157 ************************************
00:06:53.157 START TEST accel_dif_generate
00:06:53.157 ************************************
00:06:53.158 10:21:56 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=,
00:06:53.158 10:21:56 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
[2024-07-25 10:21:56.687828] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization...
00:06:53.158 [2024-07-25 10:21:56.687883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310723 ]
[2024-07-25 10:21:56.771727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 10:21:56.894678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:53.416 10:21:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:06:54.789 10:21:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:54.789
00:06:54.789 real 0m1.529s
00:06:54.789 user 0m1.350s
00:06:54.789 sys 0m0.172s
00:06:54.789 10:21:58 accel.accel_dif_generate -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:54.789 10:21:58 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:06:54.789 ************************************
00:06:54.789 END TEST accel_dif_generate
00:06:54.789 ************************************
00:06:54.789 10:21:58 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w
dif_generate_copy 00:06:54.789 10:21:58 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:54.789 10:21:58 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.789 10:21:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.789 ************************************ 00:06:54.789 START TEST accel_dif_generate_copy 00:06:54.789 ************************************ 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate_copy 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:54.789 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 
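The xtrace above repeatedly shows accel.sh's config-reader loop: `accel_perf` writes colon-separated key/value lines, and the harness splits each one with `IFS=: read -r var val` (accel.sh@19), dispatches on `$var` in a `case` statement (accel.sh@21), and records the opcode and module (accel.sh@22-23). A minimal standalone sketch of that loop follows; note the key names `workload` and `module` are placeholders I introduced for illustration, since the trace only shows the values, not the keys:

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the accel.sh@19-23 read loop seen in the trace.
parse_accel_output() {
  local accel_opc='' accel_module=''
  while IFS=: read -r var val; do        # accel.sh@19: split each line on ':'
    case "$var" in                       # accel.sh@21: dispatch on the key
      workload) accel_opc=$val ;;        # accel.sh@23: accel_opc=...
      module)   accel_module=$val ;;     # accel.sh@22: accel_module=...
    esac
  done
  echo "opc=$accel_opc module=$accel_module"
}

printf 'workload:dif_generate\nmodule:software\n' | parse_accel_output
```

The `[[ -n software ]]` / `[[ -n dif_generate ]]` checks at accel.sh@27 in the trace then simply assert that both variables were populated before the test is declared passed.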
00:06:54.789 [2024-07-25 10:21:58.260720] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:54.789 [2024-07-25 10:21:58.260780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310883 ] 00:06:54.789 [2024-07-25 10:21:58.342060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.789 [2024-07-25 10:21:58.464589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.048 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.048 10:21:58 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:55.049 
10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.049 10:21:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.423 00:06:56.423 real 0m1.516s 00:06:56.423 user 0m0.009s 00:06:56.423 sys 0m0.002s 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.423 10:21:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:56.423 ************************************ 00:06:56.423 END TEST accel_dif_generate_copy 00:06:56.423 ************************************ 00:06:56.423 10:21:59 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:56.423 10:21:59 accel -- accel/accel.sh@116 -- # run_test accel_comp 
accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:56.423 10:21:59 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:56.423 10:21:59 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.423 10:21:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.423 ************************************ 00:06:56.423 START TEST accel_comp 00:06:56.423 ************************************ 00:06:56.423 10:21:59 accel.accel_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:56.423 10:21:59 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:56.423 10:21:59 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:56.423 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.423 10:21:59 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:56.423 10:21:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.423 10:21:59 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:56.424 10:21:59 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:56.424 10:21:59 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.424 10:21:59 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.424 10:21:59 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.424 10:21:59 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.424 10:21:59 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.424 10:21:59 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:56.424 10:21:59 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 
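Each `run_test NAME cmd...` call in this log produces the starred START/END banners and the `real`/`user`/`sys` timing lines seen between them. A hypothetical, stripped-down version of that wrapper is sketched below; the real `run_test` in autotest_common.sh additionally manages xtrace state and argument checks (the `'[' 6 -le 1 ']'` lines in the trace), which this sketch omits:

```shell
#!/usr/bin/env bash
# Illustrative stand-in for autotest_common.sh's run_test: print a banner,
# time the wrapped command (emitting real/user/sys as in the log), and
# print the closing banner while preserving the command's exit status.
run_test() {
  local name=$1; shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  time "$@"                 # timing goes to stderr, like the log's real/user/sys
  local rc=$?               # capture the wrapped command's status
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
  return $rc
}

run_test demo_noop true     # demo_noop is a dummy name, not an SPDK test
```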
00:06:56.424 [2024-07-25 10:21:59.828894] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:56.424 [2024-07-25 10:21:59.828953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311149 ] 00:06:56.424 [2024-07-25 10:21:59.912309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.424 [2024-07-25 10:22:00.035276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 
00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 
10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.424 10:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.796 10:22:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.797 10:22:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.797 10:22:01 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:57.797 10:22:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.797 10:22:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.797 10:22:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.797 10:22:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:57.797 10:22:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.797 00:06:57.797 real 0m1.524s 00:06:57.797 user 0m1.358s 00:06:57.797 sys 0m0.157s 00:06:57.797 10:22:01 accel.accel_comp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.797 10:22:01 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:57.797 ************************************ 00:06:57.797 END TEST accel_comp 00:06:57.797 ************************************ 00:06:57.797 10:22:01 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:57.797 10:22:01 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:06:57.797 10:22:01 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.797 10:22:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.797 ************************************ 00:06:57.797 START TEST accel_decomp 00:06:57.797 ************************************ 00:06:57.797 10:22:01 accel.accel_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 
00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:57.797 10:22:01 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:57.797 [2024-07-25 10:22:01.400878] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
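The `build_accel_config` steps traced above (accel.sh@31 through @41) build `accel_json_cfg` as an array of JSON fragments, join them with `local IFS=,` (accel.sh@40), and pipe the result through `jq -r .` into `accel_perf -c /dev/fd/62`. In this run every `[[ 0 -gt 0 ]]` guard is false, so the array stays empty; the sketch below uses a dummy fragment I made up purely to show the join step:

```shell
#!/usr/bin/env bash
# Illustrative only: how an array of JSON fragments collapses into one
# object via the 'local IFS=,' join that accel.sh@40 appears to perform.
accel_json_cfg=()                        # accel.sh@31: start empty
accel_json_cfg+=('"driver": "dummy"')    # hypothetical fragment for the demo

join_cfg() {
  local IFS=,                            # accel.sh@40: comma-join elements
  echo "{${accel_json_cfg[*]}}"          # "${arr[*]}" joins on first IFS char
}

join_cfg
```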
00:06:57.797 [2024-07-25 10:22:01.400939] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311318 ] 00:06:57.797 [2024-07-25 10:22:01.485733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.055 [2024-07-25 10:22:01.608463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 
10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:58.055 10:22:01 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:58.055 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.056 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.056 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.056 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.056 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.056 10:22:01 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:58.056 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.056 10:22:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.056 10:22:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.056 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.056 10:22:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.430 10:22:02 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:59.430 10:22:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.430 00:06:59.430 real 0m1.532s 00:06:59.430 user 0m0.011s 00:06:59.430 sys 0m0.001s 00:06:59.430 10:22:02 accel.accel_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.430 10:22:02 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:59.430 ************************************ 00:06:59.430 END TEST accel_decomp 00:06:59.430 ************************************ 00:06:59.430 10:22:02 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:59.430 10:22:02 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:06:59.430 10:22:02 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.430 10:22:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.430 ************************************ 00:06:59.430 START TEST accel_decomp_full 00:06:59.430 ************************************ 00:06:59.430 10:22:02 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:59.430 10:22:02 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:59.430 10:22:02 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:59.430 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.430 
10:22:02 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:59.430 10:22:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.430 10:22:02 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:59.430 10:22:02 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:59.430 10:22:02 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.430 10:22:02 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.430 10:22:02 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.430 10:22:02 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.430 10:22:02 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.430 10:22:02 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:59.430 10:22:02 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:59.430 [2024-07-25 10:22:02.972679] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:59.430 [2024-07-25 10:22:02.972738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311472 ] 00:06:59.430 [2024-07-25 10:22:03.055550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.688 [2024-07-25 10:22:03.177712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.688 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.689 10:22:03 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.689 10:22:03 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.689 10:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@19 
-- # IFS=: 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.061 10:22:04 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.061 00:07:01.061 real 0m1.543s 00:07:01.061 user 0m0.009s 00:07:01.061 sys 0m0.002s 00:07:01.061 10:22:04 accel.accel_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.061 10:22:04 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:01.061 ************************************ 00:07:01.061 END TEST accel_decomp_full 00:07:01.061 ************************************ 00:07:01.061 10:22:04 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:01.061 10:22:04 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:07:01.061 10:22:04 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.061 10:22:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.061 ************************************ 00:07:01.061 START TEST accel_decomp_mcore 
00:07:01.061 ************************************ 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:01.062 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:01.062 [2024-07-25 10:22:04.557621] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:07:01.062 [2024-07-25 10:22:04.557681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311744 ] 00:07:01.062 [2024-07-25 10:22:04.638987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.062 [2024-07-25 10:22:04.764979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.062 [2024-07-25 10:22:04.765024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.062 [2024-07-25 10:22:04.765184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.062 [2024-07-25 10:22:04.765188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.320 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.321 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.321 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.321 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.321 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.321 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.321 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.321 10:22:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.693 00:07:02.693 real 0m1.524s 00:07:02.693 user 0m4.842s 00:07:02.693 sys 0m0.172s 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.693 10:22:06 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:02.693 ************************************ 00:07:02.693 END TEST accel_decomp_mcore 00:07:02.693 ************************************ 00:07:02.693 10:22:06 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.693 10:22:06 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:02.693 10:22:06 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.693 10:22:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.693 ************************************ 00:07:02.693 START TEST accel_decomp_full_mcore 00:07:02.693 ************************************ 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.693 10:22:06 
accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:02.693 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:02.693 [2024-07-25 10:22:06.127957] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:07:02.693 [2024-07-25 10:22:06.128017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311909 ] 00:07:02.694 [2024-07-25 10:22:06.210529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.694 [2024-07-25 10:22:06.336000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.694 [2024-07-25 10:22:06.336068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.694 [2024-07-25 10:22:06.336161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.694 [2024-07-25 10:22:06.336164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:02.952 10:22:06 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- 
# IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.952 10:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.326 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.326 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.326 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.326 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.326 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.326 10:22:07 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.327 00:07:04.327 real 0m1.545s 00:07:04.327 user 0m4.900s 00:07:04.327 sys 0m0.180s 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.327 10:22:07 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:04.327 ************************************ 00:07:04.327 END TEST accel_decomp_full_mcore 00:07:04.327 ************************************ 00:07:04.327 10:22:07 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.327 10:22:07 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:07:04.327 10:22:07 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.327 10:22:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.327 ************************************ 00:07:04.327 START TEST accel_decomp_mthread 
00:07:04.327 ************************************ 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:04.327 [2024-07-25 10:22:07.715766] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:07:04.327 [2024-07-25 10:22:07.715820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312181 ] 00:07:04.327 [2024-07-25 10:22:07.797735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.327 [2024-07-25 10:22:07.920363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.327 10:22:07 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # 
accel_module=software 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 
10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:04.327 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.328 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.328 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.328 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.328 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.328 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.328 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.328 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.328 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.328 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.328 10:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.700 10:22:09 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.700 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.701 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.701 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.701 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.701 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.701 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.701 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.701 10:22:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.701 00:07:05.701 real 0m1.529s 00:07:05.701 user 0m0.014s 00:07:05.701 sys 0m0.000s 00:07:05.701 10:22:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.701 10:22:09 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 
00:07:05.701 ************************************ 00:07:05.701 END TEST accel_decomp_mthread 00:07:05.701 ************************************ 00:07:05.701 10:22:09 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.701 10:22:09 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:05.701 10:22:09 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.701 10:22:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.701 ************************************ 00:07:05.701 START TEST accel_decomp_full_mthread 00:07:05.701 ************************************ 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.701 10:22:09 
accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:05.701 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:05.701 [2024-07-25 10:22:09.297974] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:07:05.701 [2024-07-25 10:22:09.298032] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312342 ] 00:07:05.701 [2024-07-25 10:22:09.382154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.959 [2024-07-25 10:22:09.503676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.959 10:22:09 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # 
case "$var" in 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.959 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.960 10:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.334 00:07:07.334 real 0m1.548s 00:07:07.334 user 0m1.374s 00:07:07.334 sys 0m0.165s 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.334 10:22:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:07.334 ************************************ 00:07:07.334 END TEST accel_decomp_full_mthread 00:07:07.334 ************************************ 00:07:07.334 10:22:10 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:07:07.334 10:22:10 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:07:07.334 10:22:10 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:07:07.334 10:22:10 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:07.334 10:22:10 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2312502 00:07:07.334 10:22:10 accel -- accel/accel.sh@63 -- # waitforlisten 2312502 00:07:07.334 10:22:10 accel -- common/autotest_common.sh@831 -- # '[' -z 2312502 ']' 00:07:07.334 10:22:10 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:07.334 
10:22:10 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.334 10:22:10 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:07.335 10:22:10 accel -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.335 10:22:10 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.335 10:22:10 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.335 10:22:10 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.335 10:22:10 accel -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.335 10:22:10 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.335 10:22:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.335 10:22:10 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.335 10:22:10 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:07.335 10:22:10 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:07.335 10:22:10 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:07.335 10:22:10 accel -- accel/accel.sh@41 -- # jq -r . 00:07:07.335 [2024-07-25 10:22:10.908809] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:07:07.335 [2024-07-25 10:22:10.908893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312502 ] 00:07:07.335 [2024-07-25 10:22:10.982989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.608 [2024-07-25 10:22:11.096715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.217 [2024-07-25 10:22:11.822535] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:08.475 10:22:12 accel -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.475 10:22:12 accel -- common/autotest_common.sh@864 -- # return 0 00:07:08.475 10:22:12 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:08.475 10:22:12 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:08.475 10:22:12 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:08.475 10:22:12 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:07:08.475 10:22:12 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:07:08.475 10:22:12 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:07:08.475 10:22:12 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.475 10:22:12 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:07:08.475 10:22:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.475 10:22:12 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:07:08.475 10:22:12 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.475 "method": "compressdev_scan_accel_module", 00:07:08.475 10:22:12 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:08.475 10:22:12 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:08.475 10:22:12 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.475 10:22:12 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:08.475 10:22:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.733 10:22:12 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- 
accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.733 10:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.733 10:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.733 10:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.733 10:22:12 accel -- accel/accel.sh@75 -- # killprocess 2312502 00:07:08.733 10:22:12 accel -- common/autotest_common.sh@950 -- # '[' -z 2312502 ']' 00:07:08.733 10:22:12 accel -- common/autotest_common.sh@954 -- # kill -0 2312502 00:07:08.733 10:22:12 accel -- common/autotest_common.sh@955 -- # uname 00:07:08.733 10:22:12 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.733 10:22:12 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2312502 00:07:08.733 10:22:12 accel -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.733 10:22:12 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.733 10:22:12 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2312502' 00:07:08.733 killing process with pid 2312502 00:07:08.733 10:22:12 accel -- common/autotest_common.sh@969 -- # 
kill 2312502 00:07:08.733 10:22:12 accel -- common/autotest_common.sh@974 -- # wait 2312502 00:07:09.300 10:22:12 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:09.300 10:22:12 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:09.300 10:22:12 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:09.300 10:22:12 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.300 10:22:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.300 ************************************ 00:07:09.300 START TEST accel_cdev_comp 00:07:09.300 ************************************ 00:07:09.300 10:22:12 accel.accel_cdev_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.300 10:22:12 
accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:09.300 10:22:12 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:07:09.300 [2024-07-25 10:22:12.797680] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:07:09.300 [2024-07-25 10:22:12.797736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312783 ] 00:07:09.300 [2024-07-25 10:22:12.879239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.300 [2024-07-25 10:22:12.992115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.248 [2024-07-25 10:22:13.711403] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:10.248 [2024-07-25 10:22:13.713992] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x10c8d40 PMD being used: compress_qat 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var 
val 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.248 [2024-07-25 10:22:13.717642] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x10cdb40 PMD being used: compress_qat 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:10.248 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.249 10:22:13 accel.accel_cdev_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 
00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.249 10:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.623 10:22:14 accel.accel_cdev_comp -- 
accel/accel.sh@20 -- # val= 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:11.623 10:22:14 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:11.623 00:07:11.623 real 0m2.180s 00:07:11.623 user 0m0.012s 00:07:11.623 sys 0m0.002s 00:07:11.623 10:22:14 
accel.accel_cdev_comp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.623 10:22:14 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:07:11.623 ************************************ 00:07:11.623 END TEST accel_cdev_comp 00:07:11.623 ************************************ 00:07:11.623 10:22:14 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:11.623 10:22:14 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:11.623 10:22:14 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.623 10:22:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.623 ************************************ 00:07:11.623 START TEST accel_cdev_decomp 00:07:11.623 ************************************ 00:07:11.623 10:22:14 accel.accel_cdev_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:11.623 10:22:14 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:11.623 10:22:14 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:11.623 10:22:14 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.623 10:22:14 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:11.623 10:22:14 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.623 10:22:14 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:11.623 10:22:14 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:11.623 10:22:14 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:07:11.623 10:22:14 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.623 10:22:14 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.623 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.623 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:11.623 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:11.623 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:11.623 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:11.623 [2024-07-25 10:22:15.020921] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:07:11.623 [2024-07-25 10:22:15.020992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313067 ] 00:07:11.623 [2024-07-25 10:22:15.102289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.623 [2024-07-25 10:22:15.219992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.558 [2024-07-25 10:22:15.937285] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:12.558 [2024-07-25 10:22:15.939756] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x221bd40 PMD being used: compress_qat 00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:12.558 10:22:15 
accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:12.558 [2024-07-25 10:22:15.943514] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2220b40 PMD being used: compress_qat
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:12.558 10:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes
00:07:13.490 10:22:17 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=
00:07:13.491 10:22:17 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]]
00:07:13.491 10:22:17 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:13.491 10:22:17 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]]
00:07:13.491
00:07:13.491 real	0m2.184s
00:07:13.491 user	0m0.010s
00:07:13.491 sys	0m0.004s
00:07:13.491 10:22:17 accel.accel_cdev_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:13.491 10:22:17 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x
00:07:13.491 ************************************
00:07:13.491 END TEST accel_cdev_decomp
00:07:13.491 ************************************
00:07:13.749 10:22:17 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:13.749 10:22:17 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']'
00:07:13.749 10:22:17 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:13.749 10:22:17 accel -- common/autotest_common.sh@10 -- # set +x
00:07:13.749 ************************************
00:07:13.749 START TEST accel_cdev_decomp_full
00:07:13.749 ************************************
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]]
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=,
00:07:13.749 10:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r .
00:07:13.749 [2024-07-25 10:22:17.247824] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization...
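The `case "$var" in` / `IFS=:` / `read -r var val` triplets that dominate this xtrace come from accel.sh splitting accel_perf's colon-delimited settings stream into key/value pairs, one read per setting. A minimal sketch of that pattern, with hypothetical setting names (the real parser lives in spdk/test/accel/accel.sh):

```shell
# Parse "key:value" lines the way the xtrace suggests: IFS=: makes read
# split each line on the first ':' into var (key) and val (value).
parse_settings() {
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # operation, e.g. decompress
            module) accel_module=$val ;;  # backend, e.g. dpdk_compressdev
            *)      : ;;                  # ignore anything else
        esac
    done
}

printf 'opc:decompress\nmodule:dpdk_compressdev\n' | {
    parse_settings
    echo "$accel_opc $accel_module"   # prints: decompress dpdk_compressdev
}
```

The braces keep the `echo` in the same subshell as `parse_settings`, since variables set on the right-hand side of a pipeline do not survive into the parent shell.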
00:07:13.749 [2024-07-25 10:22:17.247883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313348 ]
00:07:13.749 [2024-07-25 10:22:17.330488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.750 [2024-07-25 10:22:17.454995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.684 [2024-07-25 10:22:18.179820] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
00:07:14.684 [2024-07-25 10:22:18.182345] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1ba3d40 PMD being used: compress_qat
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:14.684 [2024-07-25 10:22:18.185308] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1ba7070 PMD being used: compress_qat
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds'
00:07:14.684 10:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes
00:07:16.056 10:22:19 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=
00:07:16.056 10:22:19 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]]
00:07:16.056 10:22:19 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:16.056 10:22:19 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]]
00:07:16.056
00:07:16.056 real	0m2.203s
00:07:16.056 user	0m0.013s
00:07:16.056 sys	0m0.000s
00:07:16.056 10:22:19 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:16.056 10:22:19 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:07:16.056 ************************************
00:07:16.056 END TEST accel_cdev_decomp_full
00:07:16.056 ************************************
00:07:16.056 10:22:19 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:16.056 10:22:19 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']'
00:07:16.056 10:22:19 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:16.056 10:22:19 accel -- common/autotest_common.sh@10 -- # set +x
00:07:16.056 ************************************
00:07:16.056 START TEST accel_cdev_decomp_mcore
00:07:16.056 ************************************
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]]
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=,
00:07:16.056 10:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
00:07:16.056 [2024-07-25 10:22:19.500548] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization...
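The `build_accel_config` trace above shows the config being assembled as an array of JSON fragments (`accel_json_cfg`), joined with `local IFS=,`, and piped through `jq -r .` into accel_perf's `/dev/fd/62` config descriptor. A sketch of the comma-join trick alone; the second fragment and the surrounding array wrapper are illustrative, not SPDK's exact config schema:

```shell
# With IFS set to ',', the "${arr[*]}" expansion joins array elements
# with commas, turning individual JSON objects into a JSON array body.
accel_json_cfg=()
accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params": {"pmd": 0}}')
accel_json_cfg+=('{"method": "example_second_entry"}')  # hypothetical extra fragment

join_cfg() {
    local IFS=,                  # scoped to the function, like accel.sh@40
    echo "[${accel_json_cfg[*]}]"
}

join_cfg
```

Keeping `IFS=,` `local` to the function matters: it changes only how `"${accel_json_cfg[*]}"` is joined, without disturbing word splitting elsewhere in the script.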
00:07:16.056 [2024-07-25 10:22:19.500606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313637 ]
00:07:16.056 [2024-07-25 10:22:19.582852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:16.056 [2024-07-25 10:22:19.707510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:16.056 [2024-07-25 10:22:19.707576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:16.056 [2024-07-25 10:22:19.707673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:16.056 [2024-07-25 10:22:19.707676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.991 [2024-07-25 10:22:20.366858] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
00:07:16.991 [2024-07-25 10:22:20.369157] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x225e3e0 PMD being used: compress_qat
00:07:16.991 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:07:16.991 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:07:16.991 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:07:16.991 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:07:16.991 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:07:16.991 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:16.991 [2024-07-25 10:22:20.374494] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f586c19b8b0 PMD being used: compress_qat
00:07:16.991 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:16.992 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev
00:07:16.992 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev
00:07:16.992 [2024-07-25 10:22:20.375896] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f586419b8b0 PMD being used: compress_qat
00:07:16.992 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:07:16.992 [2024-07-25 10:22:20.376493] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x22636b0 PMD being used: compress_qat
00:07:16.992 [2024-07-25 10:22:20.376645] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f585c19b8b0 PMD being used: compress_qat
00:07:16.992 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:07:16.992 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:07:16.992 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:07:16.992 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:07:16.992 10:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:07:17.925 10:22:21 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]]
00:07:17.925 10:22:21 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:17.925 10:22:21 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]]
00:07:17.925
00:07:17.925 real	0m2.148s
00:07:17.925 user	0m6.936s
00:07:17.925 sys	0m0.525s
00:07:17.925 10:22:21 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:17.925 10:22:21 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:17.925 ************************************
00:07:17.925 END TEST accel_cdev_decomp_mcore
00:07:17.925 ************************************
00:07:18.183 10:22:21 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:18.183 10:22:21 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:18.183 10:22:21 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:18.183 10:22:21 accel -- common/autotest_common.sh@10 -- # set +x
00:07:18.183 ************************************
00:07:18.183 START TEST accel_cdev_decomp_full_mcore
00:07:18.183 ************************************
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]]
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:07:18.183 10:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
00:07:18.183 [2024-07-25 10:22:21.695253] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization...
00:07:18.183 [2024-07-25 10:22:21.695312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313923 ] 00:07:18.183 [2024-07-25 10:22:21.777555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.441 [2024-07-25 10:22:21.901962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.441 [2024-07-25 10:22:21.902029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.441 [2024-07-25 10:22:21.902126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.441 [2024-07-25 10:22:21.902130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.006 [2024-07-25 10:22:22.554757] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:19.006 [2024-07-25 10:22:22.557023] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x20623e0 PMD being used: compress_qat 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 [2024-07-25 10:22:22.561598] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc6c019b8b0 PMD being used: compress_qat 00:07:19.006 
10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 [2024-07-25 10:22:22.562865] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc6b819b8b0 PMD being used: compress_qat 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 [2024-07-25 10:22:22.563510] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x20678e0 PMD being used: compress_qat 00:07:19.006 10:22:22 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:19.006 [2024-07-25 10:22:22.563675] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc6b019b8b0 PMD being used: compress_qat 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read 
-r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.006 10:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.376 10:22:23 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.376 10:22:23 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:20.376 00:07:20.376 real 0m2.186s 00:07:20.376 user 0m7.106s 00:07:20.376 sys 0m0.528s 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.376 10:22:23 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:20.376 ************************************ 00:07:20.376 END TEST accel_cdev_decomp_full_mcore 00:07:20.376 ************************************ 00:07:20.376 10:22:23 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:20.376 10:22:23 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:07:20.376 10:22:23 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.376 10:22:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.376 ************************************ 00:07:20.376 START TEST accel_cdev_decomp_mthread 00:07:20.376 ************************************ 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:20.376 10:22:23 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:20.376 10:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:20.376 [2024-07-25 10:22:23.925815] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:07:20.376 [2024-07-25 10:22:23.925875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314210 ] 00:07:20.376 [2024-07-25 10:22:24.005890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.633 [2024-07-25 10:22:24.127183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.199 [2024-07-25 10:22:24.815304] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:21.199 [2024-07-25 10:22:24.817715] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1ca9d40 PMD being used: compress_qat 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:21.199 10:22:24 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 [2024-07-25 10:22:24.821898] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1caef40 PMD being used: compress_qat 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 [2024-07-25 10:22:24.824309] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1dd1dd0 PMD being used: compress_qat 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.199 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.200 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.200 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.200 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.200 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.200 10:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.572 10:22:26 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.572 10:22:26 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:22.572 00:07:22.572 real 0m2.148s 00:07:22.572 user 0m1.619s 00:07:22.572 sys 0m0.516s 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.572 10:22:26 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:22.572 ************************************ 00:07:22.572 END TEST accel_cdev_decomp_mthread 00:07:22.572 ************************************ 00:07:22.572 10:22:26 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:22.572 10:22:26 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:22.572 10:22:26 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.572 10:22:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.572 ************************************ 00:07:22.572 START TEST accel_cdev_decomp_full_mthread 00:07:22.572 ************************************ 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- 
common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:22.572 10:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
00:07:22.572 [2024-07-25 10:22:26.119824] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:07:22.572 [2024-07-25 10:22:26.119884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314492 ] 00:07:22.572 [2024-07-25 10:22:26.201510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.830 [2024-07-25 10:22:26.324349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.395 [2024-07-25 10:22:27.027064] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:23.396 [2024-07-25 10:22:27.029460] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xdb5d40 PMD being used: compress_qat 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 [2024-07-25 10:22:27.032720] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xdb5de0 PMD being used: compress_qat 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:23.396 10:22:27 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:23.396 [2024-07-25 10:22:27.035402] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xfba9d0 PMD being used: compress_qat 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread 
-- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.396 10:22:27 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:24.767 00:07:24.767 real 0m2.184s 00:07:24.767 user 0m0.013s 00:07:24.767 sys 0m0.001s 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.767 10:22:28 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:24.767 ************************************ 00:07:24.767 END TEST accel_cdev_decomp_full_mthread 00:07:24.767 ************************************ 00:07:24.767 10:22:28 accel -- 
accel/accel.sh@134 -- # unset COMPRESSDEV 00:07:24.767 10:22:28 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:24.767 10:22:28 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:24.767 10:22:28 accel -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:24.767 10:22:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.767 10:22:28 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.767 10:22:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.767 10:22:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.767 10:22:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.767 10:22:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.767 10:22:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.767 10:22:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:24.767 10:22:28 accel -- accel/accel.sh@41 -- # jq -r . 00:07:24.767 ************************************ 00:07:24.767 START TEST accel_dif_functional_tests 00:07:24.767 ************************************ 00:07:24.767 10:22:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:24.767 [2024-07-25 10:22:28.371011] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:07:24.767 [2024-07-25 10:22:28.371069] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314777 ] 00:07:24.767 [2024-07-25 10:22:28.451781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.025 [2024-07-25 10:22:28.578487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.025 [2024-07-25 10:22:28.578553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.025 [2024-07-25 10:22:28.578556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.025 00:07:25.025 00:07:25.025 CUnit - A unit testing framework for C - Version 2.1-3 00:07:25.025 http://cunit.sourceforge.net/ 00:07:25.025 00:07:25.025 00:07:25.025 Suite: accel_dif 00:07:25.025 Test: verify: DIF generated, GUARD check ...passed 00:07:25.025 Test: verify: DIF generated, APPTAG check ...passed 00:07:25.025 Test: verify: DIF generated, REFTAG check ...passed 00:07:25.025 Test: verify: DIF not generated, GUARD check ...[2024-07-25 10:22:28.687107] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:25.025 passed 00:07:25.025 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 10:22:28.687205] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:25.025 passed 00:07:25.025 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 10:22:28.687243] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:25.025 passed 00:07:25.025 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:25.025 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 10:22:28.687304] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:25.025 passed 
00:07:25.025 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:25.025 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:25.025 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:25.025 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 10:22:28.687454] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:25.025 passed 00:07:25.025 Test: verify copy: DIF generated, GUARD check ...passed 00:07:25.025 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:25.025 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:25.025 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 10:22:28.687634] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:25.025 passed 00:07:25.025 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 10:22:28.687669] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:25.025 passed 00:07:25.025 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 10:22:28.687701] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:25.025 passed 00:07:25.025 Test: generate copy: DIF generated, GUARD check ...passed 00:07:25.025 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:25.025 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:25.025 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:25.025 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:25.025 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:25.025 Test: generate copy: iovecs-len validate ...[2024-07-25 10:22:28.687912] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:25.025 passed 00:07:25.025 Test: generate copy: buffer alignment validate ...passed 00:07:25.025 00:07:25.025 Run Summary: Type Total Ran Passed Failed Inactive 00:07:25.025 suites 1 1 n/a 0 0 00:07:25.025 tests 26 26 26 0 0 00:07:25.025 asserts 115 115 115 0 n/a 00:07:25.025 00:07:25.025 Elapsed time = 0.003 seconds 00:07:25.282 00:07:25.282 real 0m0.626s 00:07:25.282 user 0m0.907s 00:07:25.282 sys 0m0.193s 00:07:25.282 10:22:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.282 10:22:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:25.282 ************************************ 00:07:25.282 END TEST accel_dif_functional_tests 00:07:25.282 ************************************ 00:07:25.282 00:07:25.282 real 0m52.497s 00:07:25.282 user 1m1.992s 00:07:25.282 sys 0m9.958s 00:07:25.282 10:22:28 accel -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.282 10:22:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.282 ************************************ 00:07:25.282 END TEST accel 00:07:25.282 ************************************ 00:07:25.539 10:22:28 -- spdk/autotest.sh@186 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:25.539 10:22:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.539 10:22:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.539 10:22:28 -- common/autotest_common.sh@10 -- # set +x 00:07:25.539 ************************************ 00:07:25.539 START TEST accel_rpc 00:07:25.539 ************************************ 00:07:25.539 10:22:29 accel_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:25.539 * Looking for test storage... 
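The `accel_dif_functional_tests` run above exercises DIF (Data Integrity Field) verification: each "not generated" case deliberately mismatches one field and dif.c logs a "Failed to compare ..." error for the Guard, App Tag, or Ref Tag. A simplified sketch of that per-field comparison and error formatting, assuming a bare expected/actual check rather than SPDK's actual `struct spdk_dif_ctx` machinery:

```python
def dif_verify(lba, expected, actual, field):
    """Compare one DIF field and format the mismatch the way dif.c
    logs it. Returns None on match, an error string on mismatch."""
    if expected == actual:
        return None
    return (f"Failed to compare {field}: LBA={lba:x}, "
            f"Expected={expected:x}, Actual={actual:x}")

# mirrors the first failure in the log: guard 0x5a5a vs 0x7867 at LBA 0x10
err = dif_verify(0x10, 0x5a5a, 0x7867, "Guard")
print(err)
```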
00:07:25.539 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:07:25.539 10:22:29 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:25.539 10:22:29 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2314849 00:07:25.539 10:22:29 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:25.539 10:22:29 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2314849 00:07:25.539 10:22:29 accel_rpc -- common/autotest_common.sh@831 -- # '[' -z 2314849 ']' 00:07:25.539 10:22:29 accel_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.539 10:22:29 accel_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.539 10:22:29 accel_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.539 10:22:29 accel_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.539 10:22:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.539 [2024-07-25 10:22:29.135027] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:07:25.539 [2024-07-25 10:22:29.135133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314849 ] 00:07:25.539 [2024-07-25 10:22:29.212227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.796 [2024-07-25 10:22:29.324316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:26.791 10:22:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:26.791 10:22:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:26.791 10:22:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:26.791 10:22:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:26.791 10:22:30 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.791 ************************************ 00:07:26.791 START TEST accel_assign_opcode 00:07:26.791 ************************************ 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # accel_assign_opcode_test_suite 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.791 [2024-07-25 10:22:30.110743] accel_rpc.c: 
167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.791 [2024-07-25 10:22:30.118760] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.791 software 00:07:26.791 00:07:26.791 real 0m0.318s 00:07:26.791 user 0m0.035s 00:07:26.791 sys 0m0.006s 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:26.791 10:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.791 ************************************ 00:07:26.791 END TEST accel_assign_opcode 00:07:26.791 ************************************ 00:07:26.791 10:22:30 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2314849 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@950 -- # '[' -z 2314849 ']' 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@954 -- # kill -0 2314849 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@955 -- # uname 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2314849 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2314849' 00:07:26.791 killing process with pid 2314849 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@969 -- # kill 2314849 00:07:26.791 10:22:30 accel_rpc -- common/autotest_common.sh@974 -- # wait 2314849 00:07:27.356 00:07:27.356 real 0m1.924s 00:07:27.356 user 0m2.004s 00:07:27.356 sys 0m0.502s 00:07:27.356 10:22:30 accel_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.356 10:22:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.356 ************************************ 00:07:27.356 END TEST accel_rpc 00:07:27.356 ************************************ 00:07:27.356 10:22:30 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.356 10:22:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.356 10:22:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.356 10:22:30 -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.356 ************************************ 00:07:27.356 START TEST app_cmdline 00:07:27.356 ************************************ 00:07:27.356 10:22:30 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.356 * Looking for test storage... 00:07:27.356 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:27.356 10:22:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:27.356 10:22:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2315190 00:07:27.356 10:22:31 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:27.356 10:22:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2315190 00:07:27.356 10:22:31 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2315190 ']' 00:07:27.356 10:22:31 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.356 10:22:31 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.356 10:22:31 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.356 10:22:31 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.356 10:22:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.614 [2024-07-25 10:22:31.109169] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:07:27.614 [2024-07-25 10:22:31.109257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315190 ] 00:07:27.614 [2024-07-25 10:22:31.191225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.614 [2024-07-25 10:22:31.311303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.180 10:22:31 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.180 10:22:31 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:28.180 10:22:31 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:28.180 { 00:07:28.180 "version": "SPDK v24.09-pre git sha1 6f18624d4", 00:07:28.180 "fields": { 00:07:28.180 "major": 24, 00:07:28.180 "minor": 9, 00:07:28.180 "patch": 0, 00:07:28.180 "suffix": "-pre", 00:07:28.180 "commit": "6f18624d4" 00:07:28.180 } 00:07:28.180 } 00:07:28.180 10:22:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:28.180 10:22:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:28.180 10:22:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:28.180 10:22:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:28.180 10:22:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:28.180 10:22:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:28.180 10:22:31 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.180 10:22:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:28.180 10:22:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.180 10:22:31 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.180 10:22:31 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:28.180 10:22:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:28.180 10:22:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.181 10:22:31 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:28.181 10:22:31 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.181 10:22:31 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:28.181 10:22:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.181 10:22:31 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:28.181 10:22:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.181 10:22:31 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:28.181 10:22:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.181 10:22:31 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:28.181 10:22:31 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:07:28.181 10:22:31 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.439 request: 00:07:28.439 { 00:07:28.439 "method": "env_dpdk_get_mem_stats", 00:07:28.439 "req_id": 1 00:07:28.439 } 00:07:28.439 Got JSON-RPC error response 00:07:28.439 response: 00:07:28.439 { 00:07:28.439 
"code": -32601, 00:07:28.439 "message": "Method not found" 00:07:28.439 } 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.439 10:22:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2315190 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2315190 ']' 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2315190 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2315190 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2315190' 00:07:28.439 killing process with pid 2315190 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@969 -- # kill 2315190 00:07:28.439 10:22:32 app_cmdline -- common/autotest_common.sh@974 -- # wait 2315190 00:07:29.005 00:07:29.005 real 0m1.632s 00:07:29.005 user 0m1.955s 00:07:29.005 sys 0m0.471s 00:07:29.005 10:22:32 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.005 10:22:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.005 ************************************ 00:07:29.005 END TEST app_cmdline 00:07:29.005 ************************************ 00:07:29.005 10:22:32 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 
00:07:29.005 10:22:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.005 10:22:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.005 10:22:32 -- common/autotest_common.sh@10 -- # set +x 00:07:29.005 ************************************ 00:07:29.005 START TEST version 00:07:29.005 ************************************ 00:07:29.005 10:22:32 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:29.264 * Looking for test storage... 00:07:29.264 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:29.264 10:22:32 version -- app/version.sh@17 -- # get_header_version major 00:07:29.264 10:22:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:29.264 10:22:32 version -- app/version.sh@14 -- # cut -f2 00:07:29.264 10:22:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.264 10:22:32 version -- app/version.sh@17 -- # major=24 00:07:29.264 10:22:32 version -- app/version.sh@18 -- # get_header_version minor 00:07:29.264 10:22:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:29.264 10:22:32 version -- app/version.sh@14 -- # cut -f2 00:07:29.264 10:22:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.264 10:22:32 version -- app/version.sh@18 -- # minor=9 00:07:29.264 10:22:32 version -- app/version.sh@19 -- # get_header_version patch 00:07:29.264 10:22:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:29.264 10:22:32 version -- app/version.sh@14 -- # cut -f2 00:07:29.264 10:22:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.264 10:22:32 version -- app/version.sh@19 -- # patch=0 00:07:29.264 
10:22:32 version -- app/version.sh@20 -- # get_header_version suffix 00:07:29.264 10:22:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:29.264 10:22:32 version -- app/version.sh@14 -- # cut -f2 00:07:29.264 10:22:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.264 10:22:32 version -- app/version.sh@20 -- # suffix=-pre 00:07:29.264 10:22:32 version -- app/version.sh@22 -- # version=24.9 00:07:29.264 10:22:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:29.264 10:22:32 version -- app/version.sh@28 -- # version=24.9rc0 00:07:29.264 10:22:32 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:07:29.264 10:22:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:29.264 10:22:32 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:29.264 10:22:32 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:29.264 00:07:29.264 real 0m0.098s 00:07:29.264 user 0m0.046s 00:07:29.264 sys 0m0.073s 00:07:29.264 10:22:32 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.264 10:22:32 version -- common/autotest_common.sh@10 -- # set +x 00:07:29.264 ************************************ 00:07:29.264 END TEST version 00:07:29.264 ************************************ 00:07:29.264 10:22:32 -- spdk/autotest.sh@192 -- # '[' 1 -eq 1 ']' 00:07:29.264 10:22:32 -- spdk/autotest.sh@193 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:29.264 10:22:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.264 10:22:32 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.264 10:22:32 -- common/autotest_common.sh@10 -- # set +x 00:07:29.264 ************************************ 00:07:29.264 START TEST blockdev_general 00:07:29.264 ************************************ 00:07:29.264 10:22:32 blockdev_general -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:29.264 * Looking for test storage... 00:07:29.264 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:29.264 10:22:32 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:29.264 
10:22:32 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:07:29.264 10:22:32 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:07:29.265 10:22:32 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:29.265 10:22:32 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2315571 00:07:29.265 10:22:32 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:07:29.265 10:22:32 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:29.265 10:22:32 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 2315571 00:07:29.265 10:22:32 blockdev_general -- common/autotest_common.sh@831 -- # '[' -z 2315571 ']' 00:07:29.265 10:22:32 blockdev_general -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.265 10:22:32 blockdev_general -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.265 10:22:32 blockdev_general -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:29.265 10:22:32 blockdev_general -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.265 10:22:32 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:29.265 [2024-07-25 10:22:32.924705] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:07:29.265 [2024-07-25 10:22:32.924773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315571 ] 00:07:29.523 [2024-07-25 10:22:33.008212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.523 [2024-07-25 10:22:33.127293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.457 10:22:33 blockdev_general -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.457 10:22:33 blockdev_general -- common/autotest_common.sh@864 -- # return 0 00:07:30.457 10:22:33 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:30.457 10:22:33 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:07:30.457 10:22:33 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:07:30.457 10:22:33 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.457 10:22:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:30.457 [2024-07-25 10:22:34.138059] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:30.457 [2024-07-25 10:22:34.138134] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:30.457 00:07:30.457 [2024-07-25 10:22:34.146047] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:30.457 [2024-07-25 10:22:34.146082] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:30.457 00:07:30.716 Malloc0 00:07:30.716 Malloc1 00:07:30.716 
Malloc2 00:07:30.716 Malloc3 00:07:30.716 Malloc4 00:07:30.716 Malloc5 00:07:30.716 Malloc6 00:07:30.716 Malloc7 00:07:30.716 Malloc8 00:07:30.716 Malloc9 00:07:30.716 [2024-07-25 10:22:34.320214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:30.716 [2024-07-25 10:22:34.320304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.716 [2024-07-25 10:22:34.320330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2018dd0 00:07:30.716 [2024-07-25 10:22:34.320347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.716 [2024-07-25 10:22:34.321731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.716 [2024-07-25 10:22:34.321760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:30.716 TestPT 00:07:30.716 10:22:34 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.716 10:22:34 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:07:30.716 5000+0 records in 00:07:30.716 5000+0 records out 00:07:30.716 10240000 bytes (10 MB, 9.8 MiB) copied, 0.013383 s, 765 MB/s 00:07:30.716 10:22:34 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:07:30.716 10:22:34 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.716 10:22:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:30.716 AIO0 00:07:30.716 10:22:34 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.716 10:22:34 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:30.716 10:22:34 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.716 10:22:34 blockdev_general -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.716 10:22:34 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.716 10:22:34 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:07:30.716 10:22:34 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:30.716 10:22:34 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.716 10:22:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:30.975 10:22:34 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.975 10:22:34 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:30.975 10:22:34 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.975 10:22:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:30.975 10:22:34 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.975 10:22:34 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:30.975 10:22:34 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.975 10:22:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:30.975 10:22:34 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.975 10:22:34 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:30.975 10:22:34 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:30.975 10:22:34 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.975 10:22:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:30.975 10:22:34 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:30.975 10:22:34 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.975 10:22:34 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:30.975 10:22:34 blockdev_general -- 
bdev/blockdev.sh@748 -- # jq -r .name 00:07:30.977 10:22:34 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "e807163a-5629-474d-9939-600aaf58bad5"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e807163a-5629-474d-9939-600aaf58bad5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "9f630bed-d2ba-59b0-80a8-a467d31b6b76"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9f630bed-d2ba-59b0-80a8-a467d31b6b76",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "5fbd16a3-a729-5463-a69f-d7383ca6728a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5fbd16a3-a729-5463-a69f-d7383ca6728a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "2b010829-5020-5838-b89c-d21bacb58b31"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2b010829-5020-5838-b89c-d21bacb58b31",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' 
"nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "5eb5676b-4ad2-5fc4-8fe1-e7ff605398af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5eb5676b-4ad2-5fc4-8fe1-e7ff605398af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "37d9ed11-2919-5a50-9fce-39b2244a7e10"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "37d9ed11-2919-5a50-9fce-39b2244a7e10",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' 
"split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "8fee3a37-c1bb-5b67-907a-e8eabdc7c784"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8fee3a37-c1bb-5b67-907a-e8eabdc7c784",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7dc0b31f-e33e-5e4c-b261-c47486217aea"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7dc0b31f-e33e-5e4c-b261-c47486217aea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' 
"offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "d9e0f6d9-a91d-5685-b377-81284be87ab2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d9e0f6d9-a91d-5685-b377-81284be87ab2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "4dcea207-0b8f-5362-bff0-bcb2af36571f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4dcea207-0b8f-5362-bff0-bcb2af36571f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' 
"name": "Malloc2p7",' ' "aliases": [' ' "baacc9bd-bdcc-5dcf-9355-25dbebd675e3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "baacc9bd-bdcc-5dcf-9355-25dbebd675e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "bb613ec1-124c-5a7b-8eaa-ad1a24c1c4dd"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bb613ec1-124c-5a7b-8eaa-ad1a24c1c4dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' 
}' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "490d2f68-c32b-437f-918d-2b25259f8234"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "490d2f68-c32b-437f-918d-2b25259f8234",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "490d2f68-c32b-437f-918d-2b25259f8234",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "991d188f-da68-4bc0-8c74-263cec302215",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f579d79b-83ac-42e4-84c2-b4ec6cdf0fc2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' 
"1ba91fe5-1ed8-4c93-a76a-a92650a77cea"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1ba91fe5-1ed8-4c93-a76a-a92650a77cea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1ba91fe5-1ed8-4c93-a76a-a92650a77cea",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "766837c1-3ca0-44d2-bca5-69ceb9c43929",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e8f92c06-1f27-4c70-91b4-186f276e8757",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "496b1ef8-ae0e-43c2-adbb-4b410215129e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "496b1ef8-ae0e-43c2-adbb-4b410215129e",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "496b1ef8-ae0e-43c2-adbb-4b410215129e",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "93093d37-4458-4138-b973-32f008a57910",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "a2dbf399-0dd5-4057-8b83-6e317688f0ba",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5944f032-24e4-43c3-80ee-802da6af41ea"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5944f032-24e4-43c3-80ee-802da6af41ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:07:30.977 10:22:34 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:30.977 10:22:34 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:07:30.977 10:22:34 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:30.977 10:22:34 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 2315571 00:07:30.977 10:22:34 blockdev_general -- common/autotest_common.sh@950 -- # '[' -z 2315571 ']' 00:07:30.977 10:22:34 blockdev_general -- common/autotest_common.sh@954 -- # kill -0 2315571 00:07:30.977 10:22:34 blockdev_general -- common/autotest_common.sh@955 -- # uname 00:07:30.977 10:22:34 blockdev_general -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.977 10:22:34 blockdev_general -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2315571 00:07:30.977 10:22:34 blockdev_general -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.977 10:22:34 blockdev_general -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.977 10:22:34 blockdev_general -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2315571' 00:07:30.977 killing process with pid 2315571 00:07:30.977 10:22:34 blockdev_general -- common/autotest_common.sh@969 -- # kill 2315571 
00:07:30.977 10:22:34 blockdev_general -- common/autotest_common.sh@974 -- # wait 2315571 00:07:31.911 10:22:35 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:31.911 10:22:35 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:31.911 10:22:35 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:31.911 10:22:35 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.911 10:22:35 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:31.911 ************************************ 00:07:31.911 START TEST bdev_hello_world 00:07:31.911 ************************************ 00:07:31.911 10:22:35 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:31.911 [2024-07-25 10:22:35.348351] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:07:31.911 [2024-07-25 10:22:35.348425] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315860 ] 00:07:31.911 [2024-07-25 10:22:35.430990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.911 [2024-07-25 10:22:35.553957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.170 [2024-07-25 10:22:35.728326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:32.170 [2024-07-25 10:22:35.728404] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:32.170 [2024-07-25 10:22:35.728433] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:32.170 [2024-07-25 10:22:35.736323] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:32.170 [2024-07-25 10:22:35.736360] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:32.170 [2024-07-25 10:22:35.744330] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:32.170 [2024-07-25 10:22:35.744364] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:32.170 [2024-07-25 10:22:35.831026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:32.170 [2024-07-25 10:22:35.831110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.170 [2024-07-25 10:22:35.831135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2645230 00:07:32.170 [2024-07-25 10:22:35.831151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.170 [2024-07-25 10:22:35.832978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:07:32.170 [2024-07-25 10:22:35.833009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:32.428 [2024-07-25 10:22:35.989443] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:32.428 [2024-07-25 10:22:35.989500] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:07:32.428 [2024-07-25 10:22:35.989530] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:32.428 [2024-07-25 10:22:35.989574] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:32.428 [2024-07-25 10:22:35.989618] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:32.428 [2024-07-25 10:22:35.989639] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:32.428 [2024-07-25 10:22:35.989676] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:32.428 00:07:32.428 [2024-07-25 10:22:35.989704] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:32.686 00:07:32.686 real 0m1.089s 00:07:32.686 user 0m0.745s 00:07:32.686 sys 0m0.309s 00:07:32.686 10:22:36 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.686 10:22:36 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:32.686 ************************************ 00:07:32.686 END TEST bdev_hello_world 00:07:32.686 ************************************ 00:07:32.945 10:22:36 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:32.945 10:22:36 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:32.945 10:22:36 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.945 10:22:36 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:32.945 ************************************ 00:07:32.945 START TEST bdev_bounds 00:07:32.945 ************************************ 00:07:32.945 10:22:36 
blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:32.945 10:22:36 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=2316014 00:07:32.945 10:22:36 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:32.945 10:22:36 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:32.945 10:22:36 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 2316014' 00:07:32.945 Process bdevio pid: 2316014 00:07:32.945 10:22:36 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 2316014 00:07:32.945 10:22:36 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 2316014 ']' 00:07:32.945 10:22:36 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.945 10:22:36 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.945 10:22:36 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.945 10:22:36 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.945 10:22:36 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:32.945 [2024-07-25 10:22:36.485300] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:07:32.945 [2024-07-25 10:22:36.485371] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316014 ] 00:07:32.945 [2024-07-25 10:22:36.568733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.204 [2024-07-25 10:22:36.699127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.204 [2024-07-25 10:22:36.699215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.204 [2024-07-25 10:22:36.703131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.204 [2024-07-25 10:22:36.864441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:33.204 [2024-07-25 10:22:36.864515] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:33.204 [2024-07-25 10:22:36.864532] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:33.204 [2024-07-25 10:22:36.872444] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:33.204 [2024-07-25 10:22:36.872474] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:33.204 [2024-07-25 10:22:36.880440] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:33.204 [2024-07-25 10:22:36.880468] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:33.462 [2024-07-25 10:22:36.964243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:33.462 [2024-07-25 10:22:36.964318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.462 [2024-07-25 10:22:36.964337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2255bb0 
00:07:33.462 [2024-07-25 10:22:36.964350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.462 [2024-07-25 10:22:36.965979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.462 [2024-07-25 10:22:36.966006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:34.028 10:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.028 10:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:34.028 10:22:37 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:34.028 I/O targets: 00:07:34.028 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:07:34.028 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:07:34.028 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:07:34.028 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:07:34.028 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:07:34.028 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:07:34.028 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:07:34.028 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:07:34.028 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:07:34.028 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:07:34.028 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:07:34.028 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:07:34.028 raid0: 131072 blocks of 512 bytes (64 MiB) 00:07:34.028 concat0: 131072 blocks of 512 bytes (64 MiB) 00:07:34.028 raid1: 65536 blocks of 512 bytes (32 MiB) 00:07:34.028 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:07:34.028 00:07:34.028 00:07:34.028 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.028 http://cunit.sourceforge.net/ 00:07:34.028 00:07:34.028 00:07:34.028 Suite: bdevio tests on: AIO0 00:07:34.028 Test: blockdev write read block ...passed 00:07:34.028 Test: blockdev write zeroes read block ...passed 00:07:34.028 
Test: blockdev write zeroes read no split ...passed 00:07:34.028 Test: blockdev write zeroes read split ...passed 00:07:34.028 Test: blockdev write zeroes read split partial ...passed 00:07:34.028 Test: blockdev reset ...passed 00:07:34.028 Test: blockdev write read 8 blocks ...passed 00:07:34.028 Test: blockdev write read size > 128k ...passed 00:07:34.028 Test: blockdev write read invalid size ...passed 00:07:34.028 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.029 Test: blockdev write read max offset ...passed 00:07:34.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.029 Test: blockdev writev readv 8 blocks ...passed 00:07:34.029 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.029 Test: blockdev writev readv block ...passed 00:07:34.029 Test: blockdev writev readv size > 128k ...passed 00:07:34.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.029 Test: blockdev comparev and writev ...passed 00:07:34.029 Test: blockdev nvme passthru rw ...passed 00:07:34.029 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.029 Test: blockdev nvme admin passthru ...passed 00:07:34.029 Test: blockdev copy ...passed 00:07:34.029 Suite: bdevio tests on: raid1 00:07:34.029 Test: blockdev write read block ...passed 00:07:34.029 Test: blockdev write zeroes read block ...passed 00:07:34.029 Test: blockdev write zeroes read no split ...passed 00:07:34.029 Test: blockdev write zeroes read split ...passed 00:07:34.029 Test: blockdev write zeroes read split partial ...passed 00:07:34.029 Test: blockdev reset ...passed 00:07:34.029 Test: blockdev write read 8 blocks ...passed 00:07:34.029 Test: blockdev write read size > 128k ...passed 00:07:34.029 Test: blockdev write read invalid size ...passed 00:07:34.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:07:34.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.029 Test: blockdev write read max offset ...passed 00:07:34.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.029 Test: blockdev writev readv 8 blocks ...passed 00:07:34.029 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.029 Test: blockdev writev readv block ...passed 00:07:34.029 Test: blockdev writev readv size > 128k ...passed 00:07:34.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.029 Test: blockdev comparev and writev ...passed 00:07:34.029 Test: blockdev nvme passthru rw ...passed 00:07:34.029 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.029 Test: blockdev nvme admin passthru ...passed 00:07:34.029 Test: blockdev copy ...passed 00:07:34.029 Suite: bdevio tests on: concat0 00:07:34.029 Test: blockdev write read block ...passed 00:07:34.029 Test: blockdev write zeroes read block ...passed 00:07:34.029 Test: blockdev write zeroes read no split ...passed 00:07:34.029 Test: blockdev write zeroes read split ...passed 00:07:34.029 Test: blockdev write zeroes read split partial ...passed 00:07:34.029 Test: blockdev reset ...passed 00:07:34.029 Test: blockdev write read 8 blocks ...passed 00:07:34.029 Test: blockdev write read size > 128k ...passed 00:07:34.029 Test: blockdev write read invalid size ...passed 00:07:34.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.029 Test: blockdev write read max offset ...passed 00:07:34.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.029 Test: blockdev writev readv 8 blocks ...passed 00:07:34.029 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.029 Test: blockdev writev readv block ...passed 00:07:34.029 Test: blockdev writev readv size > 128k ...passed 00:07:34.029 Test: 
blockdev writev readv size > 128k in two iovs ...passed 00:07:34.029 Test: blockdev comparev and writev ...passed 00:07:34.029 Test: blockdev nvme passthru rw ...passed 00:07:34.029 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.029 Test: blockdev nvme admin passthru ...passed 00:07:34.029 Test: blockdev copy ...passed 00:07:34.029 Suite: bdevio tests on: raid0 00:07:34.029 Test: blockdev write read block ...passed 00:07:34.029 Test: blockdev write zeroes read block ...passed 00:07:34.029 Test: blockdev write zeroes read no split ...passed 00:07:34.029 Test: blockdev write zeroes read split ...passed 00:07:34.029 Test: blockdev write zeroes read split partial ...passed 00:07:34.029 Test: blockdev reset ...passed 00:07:34.029 Test: blockdev write read 8 blocks ...passed 00:07:34.029 Test: blockdev write read size > 128k ...passed 00:07:34.029 Test: blockdev write read invalid size ...passed 00:07:34.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.029 Test: blockdev write read max offset ...passed 00:07:34.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.029 Test: blockdev writev readv 8 blocks ...passed 00:07:34.029 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.029 Test: blockdev writev readv block ...passed 00:07:34.029 Test: blockdev writev readv size > 128k ...passed 00:07:34.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.029 Test: blockdev comparev and writev ...passed 00:07:34.029 Test: blockdev nvme passthru rw ...passed 00:07:34.029 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.029 Test: blockdev nvme admin passthru ...passed 00:07:34.029 Test: blockdev copy ...passed 00:07:34.029 Suite: bdevio tests on: TestPT 00:07:34.029 Test: blockdev write read block ...passed 00:07:34.029 Test: blockdev write zeroes read block ...passed 
00:07:34.029 Test: blockdev write zeroes read no split ...passed 00:07:34.029 Test: blockdev write zeroes read split ...passed 00:07:34.029 Test: blockdev write zeroes read split partial ...passed 00:07:34.029 Test: blockdev reset ...passed 00:07:34.029 Test: blockdev write read 8 blocks ...passed 00:07:34.029 Test: blockdev write read size > 128k ...passed 00:07:34.029 Test: blockdev write read invalid size ...passed 00:07:34.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.029 Test: blockdev write read max offset ...passed 00:07:34.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.029 Test: blockdev writev readv 8 blocks ...passed 00:07:34.029 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.029 Test: blockdev writev readv block ...passed 00:07:34.029 Test: blockdev writev readv size > 128k ...passed 00:07:34.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.029 Test: blockdev comparev and writev ...passed 00:07:34.029 Test: blockdev nvme passthru rw ...passed 00:07:34.029 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.029 Test: blockdev nvme admin passthru ...passed 00:07:34.029 Test: blockdev copy ...passed 00:07:34.029 Suite: bdevio tests on: Malloc2p7 00:07:34.029 Test: blockdev write read block ...passed 00:07:34.029 Test: blockdev write zeroes read block ...passed 00:07:34.029 Test: blockdev write zeroes read no split ...passed 00:07:34.029 Test: blockdev write zeroes read split ...passed 00:07:34.029 Test: blockdev write zeroes read split partial ...passed 00:07:34.029 Test: blockdev reset ...passed 00:07:34.029 Test: blockdev write read 8 blocks ...passed 00:07:34.029 Test: blockdev write read size > 128k ...passed 00:07:34.029 Test: blockdev write read invalid size ...passed 00:07:34.029 Test: blockdev write read offset + nbytes == size of blockdev 
...passed 00:07:34.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.029 Test: blockdev write read max offset ...passed 00:07:34.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.029 Test: blockdev writev readv 8 blocks ...passed 00:07:34.029 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.029 Test: blockdev writev readv block ...passed 00:07:34.029 Test: blockdev writev readv size > 128k ...passed 00:07:34.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.029 Test: blockdev comparev and writev ...passed 00:07:34.029 Test: blockdev nvme passthru rw ...passed 00:07:34.029 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.029 Test: blockdev nvme admin passthru ...passed 00:07:34.029 Test: blockdev copy ...passed 00:07:34.029 Suite: bdevio tests on: Malloc2p6 00:07:34.029 Test: blockdev write read block ...passed 00:07:34.029 Test: blockdev write zeroes read block ...passed 00:07:34.029 Test: blockdev write zeroes read no split ...passed 00:07:34.029 Test: blockdev write zeroes read split ...passed 00:07:34.029 Test: blockdev write zeroes read split partial ...passed 00:07:34.029 Test: blockdev reset ...passed 00:07:34.029 Test: blockdev write read 8 blocks ...passed 00:07:34.029 Test: blockdev write read size > 128k ...passed 00:07:34.029 Test: blockdev write read invalid size ...passed 00:07:34.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.029 Test: blockdev write read max offset ...passed 00:07:34.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.029 Test: blockdev writev readv 8 blocks ...passed 00:07:34.029 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.029 Test: blockdev writev readv block ...passed 00:07:34.029 Test: blockdev writev readv size > 128k ...passed 00:07:34.029 
Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.029 Test: blockdev comparev and writev ...passed 00:07:34.029 Test: blockdev nvme passthru rw ...passed 00:07:34.029 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.029 Test: blockdev nvme admin passthru ...passed 00:07:34.029 Test: blockdev copy ...passed 00:07:34.029 Suite: bdevio tests on: Malloc2p5 00:07:34.029 Test: blockdev write read block ...passed 00:07:34.029 Test: blockdev write zeroes read block ...passed 00:07:34.029 Test: blockdev write zeroes read no split ...passed 00:07:34.029 Test: blockdev write zeroes read split ...passed 00:07:34.029 Test: blockdev write zeroes read split partial ...passed 00:07:34.029 Test: blockdev reset ...passed 00:07:34.029 Test: blockdev write read 8 blocks ...passed 00:07:34.029 Test: blockdev write read size > 128k ...passed 00:07:34.029 Test: blockdev write read invalid size ...passed 00:07:34.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.029 Test: blockdev write read max offset ...passed 00:07:34.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.029 Test: blockdev writev readv 8 blocks ...passed 00:07:34.030 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.030 Test: blockdev writev readv block ...passed 00:07:34.030 Test: blockdev writev readv size > 128k ...passed 00:07:34.030 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.030 Test: blockdev comparev and writev ...passed 00:07:34.030 Test: blockdev nvme passthru rw ...passed 00:07:34.030 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.030 Test: blockdev nvme admin passthru ...passed 00:07:34.030 Test: blockdev copy ...passed 00:07:34.030 Suite: bdevio tests on: Malloc2p4 00:07:34.030 Test: blockdev write read block ...passed 00:07:34.030 Test: blockdev write zeroes read block 
...passed 00:07:34.030 Test: blockdev write zeroes read no split ...passed 00:07:34.030 Test: blockdev write zeroes read split ...passed 00:07:34.030 Test: blockdev write zeroes read split partial ...passed 00:07:34.030 Test: blockdev reset ...passed 00:07:34.030 Test: blockdev write read 8 blocks ...passed 00:07:34.030 Test: blockdev write read size > 128k ...passed 00:07:34.030 Test: blockdev write read invalid size ...passed 00:07:34.030 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.030 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.030 Test: blockdev write read max offset ...passed 00:07:34.030 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.030 Test: blockdev writev readv 8 blocks ...passed 00:07:34.030 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.030 Test: blockdev writev readv block ...passed 00:07:34.030 Test: blockdev writev readv size > 128k ...passed 00:07:34.030 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.030 Test: blockdev comparev and writev ...passed 00:07:34.030 Test: blockdev nvme passthru rw ...passed 00:07:34.030 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.030 Test: blockdev nvme admin passthru ...passed 00:07:34.030 Test: blockdev copy ...passed 00:07:34.030 Suite: bdevio tests on: Malloc2p3 00:07:34.030 Test: blockdev write read block ...passed 00:07:34.030 Test: blockdev write zeroes read block ...passed 00:07:34.030 Test: blockdev write zeroes read no split ...passed 00:07:34.030 Test: blockdev write zeroes read split ...passed 00:07:34.030 Test: blockdev write zeroes read split partial ...passed 00:07:34.030 Test: blockdev reset ...passed 00:07:34.030 Test: blockdev write read 8 blocks ...passed 00:07:34.030 Test: blockdev write read size > 128k ...passed 00:07:34.030 Test: blockdev write read invalid size ...passed 00:07:34.030 Test: blockdev write read offset + nbytes == size of 
blockdev ...passed 00:07:34.030 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.030 Test: blockdev write read max offset ...passed 00:07:34.030 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.030 Test: blockdev writev readv 8 blocks ...passed 00:07:34.030 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.030 Test: blockdev writev readv block ...passed 00:07:34.030 Test: blockdev writev readv size > 128k ...passed 00:07:34.030 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.030 Test: blockdev comparev and writev ...passed 00:07:34.030 Test: blockdev nvme passthru rw ...passed 00:07:34.030 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.030 Test: blockdev nvme admin passthru ...passed 00:07:34.030 Test: blockdev copy ...passed 00:07:34.030 Suite: bdevio tests on: Malloc2p2 00:07:34.030 Test: blockdev write read block ...passed 00:07:34.030 Test: blockdev write zeroes read block ...passed 00:07:34.289 Test: blockdev write zeroes read no split ...passed 00:07:34.289 Test: blockdev write zeroes read split ...passed 00:07:34.289 Test: blockdev write zeroes read split partial ...passed 00:07:34.289 Test: blockdev reset ...passed 00:07:34.289 Test: blockdev write read 8 blocks ...passed 00:07:34.289 Test: blockdev write read size > 128k ...passed 00:07:34.289 Test: blockdev write read invalid size ...passed 00:07:34.289 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.289 Test: blockdev write read max offset ...passed 00:07:34.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.289 Test: blockdev writev readv 8 blocks ...passed 00:07:34.289 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.289 Test: blockdev writev readv block ...passed 00:07:34.289 Test: blockdev writev readv size > 128k ...passed 
00:07:34.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.289 Test: blockdev comparev and writev ...passed 00:07:34.289 Test: blockdev nvme passthru rw ...passed 00:07:34.289 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.289 Test: blockdev nvme admin passthru ...passed 00:07:34.289 Test: blockdev copy ...passed 00:07:34.289 Suite: bdevio tests on: Malloc2p1 00:07:34.289 Test: blockdev write read block ...passed 00:07:34.289 Test: blockdev write zeroes read block ...passed 00:07:34.289 Test: blockdev write zeroes read no split ...passed 00:07:34.289 Test: blockdev write zeroes read split ...passed 00:07:34.289 Test: blockdev write zeroes read split partial ...passed 00:07:34.289 Test: blockdev reset ...passed 00:07:34.289 Test: blockdev write read 8 blocks ...passed 00:07:34.289 Test: blockdev write read size > 128k ...passed 00:07:34.289 Test: blockdev write read invalid size ...passed 00:07:34.289 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.289 Test: blockdev write read max offset ...passed 00:07:34.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.289 Test: blockdev writev readv 8 blocks ...passed 00:07:34.289 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.289 Test: blockdev writev readv block ...passed 00:07:34.289 Test: blockdev writev readv size > 128k ...passed 00:07:34.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.289 Test: blockdev comparev and writev ...passed 00:07:34.289 Test: blockdev nvme passthru rw ...passed 00:07:34.289 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.289 Test: blockdev nvme admin passthru ...passed 00:07:34.289 Test: blockdev copy ...passed 00:07:34.289 Suite: bdevio tests on: Malloc2p0 00:07:34.289 Test: blockdev write read block ...passed 00:07:34.289 Test: blockdev write 
zeroes read block ...passed 00:07:34.289 Test: blockdev write zeroes read no split ...passed 00:07:34.289 Test: blockdev write zeroes read split ...passed 00:07:34.289 Test: blockdev write zeroes read split partial ...passed 00:07:34.289 Test: blockdev reset ...passed 00:07:34.289 Test: blockdev write read 8 blocks ...passed 00:07:34.289 Test: blockdev write read size > 128k ...passed 00:07:34.289 Test: blockdev write read invalid size ...passed 00:07:34.289 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.289 Test: blockdev write read max offset ...passed 00:07:34.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.289 Test: blockdev writev readv 8 blocks ...passed 00:07:34.289 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.289 Test: blockdev writev readv block ...passed 00:07:34.289 Test: blockdev writev readv size > 128k ...passed 00:07:34.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.289 Test: blockdev comparev and writev ...passed 00:07:34.289 Test: blockdev nvme passthru rw ...passed 00:07:34.289 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.289 Test: blockdev nvme admin passthru ...passed 00:07:34.289 Test: blockdev copy ...passed 00:07:34.289 Suite: bdevio tests on: Malloc1p1 00:07:34.289 Test: blockdev write read block ...passed 00:07:34.289 Test: blockdev write zeroes read block ...passed 00:07:34.289 Test: blockdev write zeroes read no split ...passed 00:07:34.289 Test: blockdev write zeroes read split ...passed 00:07:34.289 Test: blockdev write zeroes read split partial ...passed 00:07:34.289 Test: blockdev reset ...passed 00:07:34.289 Test: blockdev write read 8 blocks ...passed 00:07:34.289 Test: blockdev write read size > 128k ...passed 00:07:34.289 Test: blockdev write read invalid size ...passed 00:07:34.289 Test: blockdev write read offset + 
nbytes == size of blockdev ...passed 00:07:34.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.289 Test: blockdev write read max offset ...passed 00:07:34.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.289 Test: blockdev writev readv 8 blocks ...passed 00:07:34.289 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.289 Test: blockdev writev readv block ...passed 00:07:34.289 Test: blockdev writev readv size > 128k ...passed 00:07:34.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.289 Test: blockdev comparev and writev ...passed 00:07:34.289 Test: blockdev nvme passthru rw ...passed 00:07:34.289 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.289 Test: blockdev nvme admin passthru ...passed 00:07:34.289 Test: blockdev copy ...passed 00:07:34.289 Suite: bdevio tests on: Malloc1p0 00:07:34.289 Test: blockdev write read block ...passed 00:07:34.289 Test: blockdev write zeroes read block ...passed 00:07:34.289 Test: blockdev write zeroes read no split ...passed 00:07:34.289 Test: blockdev write zeroes read split ...passed 00:07:34.289 Test: blockdev write zeroes read split partial ...passed 00:07:34.289 Test: blockdev reset ...passed 00:07:34.289 Test: blockdev write read 8 blocks ...passed 00:07:34.289 Test: blockdev write read size > 128k ...passed 00:07:34.289 Test: blockdev write read invalid size ...passed 00:07:34.289 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.289 Test: blockdev write read max offset ...passed 00:07:34.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.289 Test: blockdev writev readv 8 blocks ...passed 00:07:34.289 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.289 Test: blockdev writev readv block ...passed 00:07:34.289 Test: blockdev writev readv size > 
128k ...passed 00:07:34.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.289 Test: blockdev comparev and writev ...passed 00:07:34.289 Test: blockdev nvme passthru rw ...passed 00:07:34.289 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.289 Test: blockdev nvme admin passthru ...passed 00:07:34.289 Test: blockdev copy ...passed 00:07:34.289 Suite: bdevio tests on: Malloc0 00:07:34.289 Test: blockdev write read block ...passed 00:07:34.289 Test: blockdev write zeroes read block ...passed 00:07:34.289 Test: blockdev write zeroes read no split ...passed 00:07:34.289 Test: blockdev write zeroes read split ...passed 00:07:34.289 Test: blockdev write zeroes read split partial ...passed 00:07:34.289 Test: blockdev reset ...passed 00:07:34.289 Test: blockdev write read 8 blocks ...passed 00:07:34.289 Test: blockdev write read size > 128k ...passed 00:07:34.289 Test: blockdev write read invalid size ...passed 00:07:34.289 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.289 Test: blockdev write read max offset ...passed 00:07:34.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.289 Test: blockdev writev readv 8 blocks ...passed 00:07:34.289 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.289 Test: blockdev writev readv block ...passed 00:07:34.289 Test: blockdev writev readv size > 128k ...passed 00:07:34.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.289 Test: blockdev comparev and writev ...passed 00:07:34.289 Test: blockdev nvme passthru rw ...passed 00:07:34.289 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.289 Test: blockdev nvme admin passthru ...passed 00:07:34.289 Test: blockdev copy ...passed 00:07:34.289 00:07:34.289 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.289 suites 16 16 n/a 0 0 00:07:34.289 
tests 368 368 368 0 0 00:07:34.289 asserts 2224 2224 2224 0 n/a 00:07:34.289 00:07:34.289 Elapsed time = 0.524 seconds 00:07:34.289 0 00:07:34.289 10:22:37 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 2316014 00:07:34.289 10:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 2316014 ']' 00:07:34.289 10:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 2316014 00:07:34.289 10:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:34.289 10:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.289 10:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2316014 00:07:34.289 10:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.289 10:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.289 10:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2316014' 00:07:34.289 killing process with pid 2316014 00:07:34.289 10:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@969 -- # kill 2316014 00:07:34.289 10:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@974 -- # wait 2316014 00:07:34.548 10:22:38 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:34.548 00:07:34.548 real 0m1.781s 00:07:34.548 user 0m4.511s 00:07:34.548 sys 0m0.448s 00:07:34.548 10:22:38 blockdev_general.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.548 10:22:38 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:34.548 ************************************ 00:07:34.548 END TEST bdev_bounds 00:07:34.548 ************************************ 00:07:34.548 10:22:38 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd 
nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:34.548 10:22:38 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:34.548 10:22:38 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.548 10:22:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:34.806 ************************************ 00:07:34.806 START TEST bdev_nbd 00:07:34.806 ************************************ 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:34.806 10:22:38 
blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=2316189 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 2316189 /var/tmp/spdk-nbd.sock 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 2316189 ']' 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:34.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.806 10:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:34.806 [2024-07-25 10:22:38.326914] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:07:34.806 [2024-07-25 10:22:38.326996] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.806 [2024-07-25 10:22:38.412657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.065 [2024-07-25 10:22:38.532100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.065 [2024-07-25 10:22:38.701765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:35.065 [2024-07-25 10:22:38.701824] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:35.065 [2024-07-25 10:22:38.701840] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:35.065 [2024-07-25 10:22:38.709782] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:35.065 [2024-07-25 10:22:38.709817] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:35.065 [2024-07-25 10:22:38.717789] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: Malloc2 00:07:35.065 [2024-07-25 10:22:38.717822] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:35.323 [2024-07-25 10:22:38.804766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:35.323 [2024-07-25 10:22:38.804843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.323 [2024-07-25 10:22:38.804865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b66150 00:07:35.323 [2024-07-25 10:22:38.804880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.323 [2024-07-25 10:22:38.806592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.323 [2024-07-25 10:22:38.806622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 
Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:35.581 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- 
# break 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.839 1+0 records in 00:07:35.839 1+0 records out 00:07:35.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000146476 s, 28.0 MB/s 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:35.839 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:07:36.098 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:36.098 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:36.098 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:36.098 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:36.098 10:22:39 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # local i 00:07:36.098 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:36.098 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:36.098 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:36.356 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:36.357 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:36.357 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:36.357 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.357 1+0 records in 00:07:36.357 1+0 records out 00:07:36.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217807 s, 18.8 MB/s 00:07:36.357 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:36.357 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:36.357 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:36.357 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:36.357 10:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:36.357 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.357 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:36.357 10:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:07:36.614 10:22:40 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.614 1+0 records in 00:07:36.614 1+0 records out 00:07:36.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209434 s, 19.6 MB/s 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:07:36.614 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.615 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:36.615 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:07:36.872 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.873 1+0 records in 00:07:36.873 1+0 records out 00:07:36.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216124 s, 19.0 MB/s 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:36.873 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.131 1+0 records in 00:07:37.131 1+0 records out 00:07:37.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309516 s, 13.2 MB/s 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:37.131 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i 
<= 20 )) 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.389 1+0 records in 00:07:37.389 1+0 records out 00:07:37.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251343 s, 16.3 MB/s 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:37.389 10:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.647 1+0 records in 00:07:37.647 1+0 records out 00:07:37.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329599 s, 12.4 MB/s 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:37.647 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:37.648 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.648 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:37.648 
10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.906 1+0 records in 00:07:37.906 1+0 records out 00:07:37.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302566 s, 13.5 MB/s 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:37.906 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.164 1+0 records in 00:07:38.164 1+0 records out 
00:07:38.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258554 s, 15.8 MB/s 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:38.164 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:38.422 10:22:41 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.422 1+0 records in 00:07:38.422 1+0 records out 00:07:38.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361966 s, 11.3 MB/s 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:38.422 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:38.423 10:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 
00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.681 1+0 records in 00:07:38.681 1+0 records out 00:07:38.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431194 s, 9.5 MB/s 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:38.681 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 
-- # nbd_device=/dev/nbd11 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.939 1+0 records in 00:07:38.939 1+0 records out 00:07:38.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557853 s, 7.3 MB/s 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:38.939 
10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:38.939 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.197 1+0 records in 00:07:39.197 1+0 records out 00:07:39.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400544 s, 10.2 MB/s 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:39.197 10:22:42 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@886 -- # size=4096 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:39.197 10:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.455 1+0 records in 00:07:39.455 1+0 records out 00:07:39.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385846 s, 10.6 MB/s 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:39.455 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep 
-q -w nbd14 /proc/partitions 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.721 1+0 records in 00:07:39.721 1+0 records out 00:07:39.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376109 s, 10.9 MB/s 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:39.721 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.981 1+0 records in 00:07:39.981 1+0 records out 00:07:39.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445493 s, 9.2 MB/s 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:39.981 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.239 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd0", 00:07:40.239 "bdev_name": "Malloc0" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd1", 00:07:40.239 "bdev_name": "Malloc1p0" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd2", 00:07:40.239 "bdev_name": "Malloc1p1" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd3", 00:07:40.239 "bdev_name": "Malloc2p0" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd4", 00:07:40.239 "bdev_name": "Malloc2p1" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd5", 00:07:40.239 "bdev_name": "Malloc2p2" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd6", 00:07:40.239 "bdev_name": "Malloc2p3" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd7", 00:07:40.239 "bdev_name": "Malloc2p4" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd8", 00:07:40.239 "bdev_name": "Malloc2p5" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd9", 00:07:40.239 "bdev_name": "Malloc2p6" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd10", 00:07:40.239 "bdev_name": "Malloc2p7" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd11", 00:07:40.239 "bdev_name": "TestPT" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd12", 00:07:40.239 "bdev_name": "raid0" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd13", 00:07:40.239 "bdev_name": "concat0" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd14", 00:07:40.239 "bdev_name": "raid1" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd15", 00:07:40.239 "bdev_name": "AIO0" 00:07:40.239 } 00:07:40.239 ]' 00:07:40.239 10:22:43 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:40.239 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd0", 00:07:40.239 "bdev_name": "Malloc0" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd1", 00:07:40.239 "bdev_name": "Malloc1p0" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd2", 00:07:40.239 "bdev_name": "Malloc1p1" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd3", 00:07:40.239 "bdev_name": "Malloc2p0" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd4", 00:07:40.239 "bdev_name": "Malloc2p1" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd5", 00:07:40.239 "bdev_name": "Malloc2p2" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd6", 00:07:40.239 "bdev_name": "Malloc2p3" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd7", 00:07:40.239 "bdev_name": "Malloc2p4" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd8", 00:07:40.239 "bdev_name": "Malloc2p5" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd9", 00:07:40.239 "bdev_name": "Malloc2p6" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd10", 00:07:40.239 "bdev_name": "Malloc2p7" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd11", 00:07:40.239 "bdev_name": "TestPT" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd12", 00:07:40.239 "bdev_name": "raid0" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd13", 00:07:40.239 "bdev_name": "concat0" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd14", 00:07:40.239 "bdev_name": "raid1" 00:07:40.239 }, 00:07:40.239 { 00:07:40.239 "nbd_device": "/dev/nbd15", 00:07:40.239 "bdev_name": "AIO0" 00:07:40.239 } 00:07:40.239 ]' 00:07:40.239 10:22:43 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:40.239 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:07:40.239 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.239 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:07:40.239 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:40.239 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:40.239 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.239 10:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:40.497 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:40.497 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:40.497 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:40.497 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.497 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.497 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:40.497 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:40.497 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.497 10:22:44 
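The `nbd_get_disks` RPC above returns a JSON array of `{nbd_device, bdev_name}` pairs, which `bdev/nbd_common.sh@119` splits into a bash array with `jq`. A trimmed-down sketch of that step (assumes `jq` is installed; the two-entry payload here is a stand-in for the sixteen-disk array in the trace):

```shell
# Sketch of the nbd_common.sh@119 step: extract the device paths
# from nbd_get_disks-style JSON into a bash array.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1p0" }
]'
nbd_disks_name=($(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device'))
echo "${nbd_disks_name[@]}"
```

As the trace shows at `nbd_common.sh@120`, this device list is then handed to `nbd_stop_disks`, which loops over it issuing one `nbd_stop_disk` RPC per device.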
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.497 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:40.754 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:40.754 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:40.754 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:40.754 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.754 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.754 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:40.754 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:40.754 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.754 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.754 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:41.013 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:41.013 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:41.013 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:41.013 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.013 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.013 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:41.013 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
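The teardown loop traced here uses `waitfornbd_exit` (`bdev/nbd_common.sh@35-45`), the inverse of the startup wait: after `nbd_stop_disk`, it polls until the device *disappears* from `/proc/partitions`. A hedged sketch (`waitfornbd_exit_sketch` is an illustrative name; the partitions-file parameter and the non-zero timeout return are assumptions added for testability, whereas the traced helper reads `/proc/partitions` directly and returns 0 even on timeout):

```shell
# Hedged sketch of the waitfornbd_exit polling loop traced above.
waitfornbd_exit_sketch() {
    local nbd_name=$1
    local partitions_file=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        # Teardown is complete once the kernel no longer lists the device;
        # this is the "break" at nbd_common.sh@41 in the trace.
        grep -q -w "$nbd_name" "$partitions_file" || return 0
        sleep 0.1
    done
    return 1
}
```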
00:07:41.013 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.013 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.013 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:41.273 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:41.273 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:41.273 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:41.273 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.273 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.273 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:41.273 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.273 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.273 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.273 10:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:41.532 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:41.532 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:41.532 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:41.532 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.532 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.532 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd4 /proc/partitions 00:07:41.532 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.532 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.532 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.532 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:41.790 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:41.790 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:41.790 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:41.790 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.790 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.790 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:41.790 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.790 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.790 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.790 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:42.048 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:42.048 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:42.048 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:42.048 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.048 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:07:42.048 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:42.048 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.048 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.048 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.048 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:07:42.306 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:07:42.306 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:07:42.306 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:07:42.306 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.306 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.306 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:07:42.306 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.306 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.306 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.306 10:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:07:42.564 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:07:42.564 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:07:42.564 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:07:42.564 10:22:46 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.564 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.564 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:07:42.564 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.564 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.564 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.564 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:07:42.828 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:07:42.828 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:07:42.828 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:07:42.828 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.828 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.828 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:07:42.828 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.828 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.828 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.828 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:43.140 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:43.140 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:43.140 10:22:46 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:43.140 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.140 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.140 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:43.140 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.140 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.140 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.140 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:43.398 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:43.398 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:43.398 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:43.398 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.398 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.398 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:43.398 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.398 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.398 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.398 10:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:43.656 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 
00:07:43.656 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:43.656 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:43.656 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.656 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.656 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:43.656 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.656 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.656 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.656 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:43.914 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:43.914 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:43.914 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:43.914 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.914 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.914 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:43.914 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.914 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.914 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.914 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd14 00:07:44.172 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:44.172 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:44.172 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:44.172 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.172 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.172 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:44.172 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.172 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.172 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.172 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:07:44.430 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:07:44.430 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:07:44.430 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:07:44.430 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.430 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.430 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:07:44.430 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.430 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.430 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:44.430 10:22:47 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.430 10:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # 
bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:07:44.688 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.689 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:44.689 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:44.689 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:44.689 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:44.689 10:22:48 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@12 -- # local i 00:07:44.689 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:44.689 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:44.689 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:44.947 /dev/nbd0 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.947 1+0 records in 00:07:44.947 1+0 records out 00:07:44.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192148 s, 21.3 MB/s 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:44.947 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:07:45.205 /dev/nbd1 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:07:45.205 1+0 records in 00:07:45.205 1+0 records out 00:07:45.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019723 s, 20.8 MB/s 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:45.205 10:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:07:45.463 /dev/nbd10 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.463 10:22:49 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.463 1+0 records in 00:07:45.463 1+0 records out 00:07:45.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256233 s, 16.0 MB/s 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:45.463 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:07:45.721 /dev/nbd11 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.721 1+0 records in 00:07:45.721 1+0 records out 00:07:45.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027589 s, 14.8 MB/s 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:45.721 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:07:45.980 /dev/nbd12 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd12 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.980 1+0 records in 00:07:45.980 1+0 records out 00:07:45.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370679 s, 11.0 MB/s 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.980 10:22:49 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:45.980 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:07:46.238 /dev/nbd13 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.238 1+0 records in 00:07:46.238 1+0 records out 00:07:46.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331769 s, 12.3 MB/s 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:46.238 10:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:07:46.496 /dev/nbd14 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.496 1+0 records in 00:07:46.496 1+0 records out 00:07:46.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375066 s, 
10.9 MB/s 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:46.496 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:07:46.755 /dev/nbd15 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.755 1+0 records in 00:07:46.755 1+0 records out 00:07:46.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375026 s, 10.9 MB/s 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:46.755 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.014 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.014 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:47.014 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.014 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:47.014 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:07:47.014 /dev/nbd2 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 
00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.272 1+0 records in 00:07:47.272 1+0 records out 00:07:47.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405307 s, 10.1 MB/s 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:47.272 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:07:47.272 /dev/nbd3 00:07:47.531 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:07:47.531 10:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:07:47.531 10:22:50 
blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:47.531 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:47.531 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.531 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.531 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:47.531 10:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:47.531 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.531 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.531 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.531 1+0 records in 00:07:47.531 1+0 records out 00:07:47.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324603 s, 12.6 MB/s 00:07:47.531 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.531 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:47.531 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.531 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.531 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:47.531 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.531 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:47.531 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:07:47.790 /dev/nbd4 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.790 1+0 records in 00:07:47.790 1+0 records out 00:07:47.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408245 s, 10.0 MB/s 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:47.790 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:07:48.048 /dev/nbd5 00:07:48.048 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:07:48.048 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:07:48.048 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.049 1+0 records in 00:07:48.049 1+0 records out 00:07:48.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396811 s, 10.3 MB/s 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 
00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:48.049 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:07:48.307 /dev/nbd6 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.307 1+0 records in 00:07:48.307 1+0 records out 00:07:48.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048755 s, 8.4 MB/s 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:48.307 10:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:07:48.565 /dev/nbd7 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:07:48.565 10:22:52 
blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.565 1+0 records in 00:07:48.565 1+0 records out 00:07:48.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543984 s, 7.5 MB/s 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:48.565 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:07:48.823 /dev/nbd8 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # 
local i 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.823 1+0 records in 00:07:48.823 1+0 records out 00:07:48.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520825 s, 7.9 MB/s 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:48.823 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.824 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.824 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.824 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:48.824 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:07:49.082 /dev/nbd9 00:07:49.082 10:22:52 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.082 1+0 records in 00:07:49.082 1+0 records out 00:07:49.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446901 s, 9.2 MB/s 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 
-- # (( i++ )) 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.082 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd0", 00:07:49.341 "bdev_name": "Malloc0" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd1", 00:07:49.341 "bdev_name": "Malloc1p0" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd10", 00:07:49.341 "bdev_name": "Malloc1p1" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd11", 00:07:49.341 "bdev_name": "Malloc2p0" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd12", 00:07:49.341 "bdev_name": "Malloc2p1" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd13", 00:07:49.341 "bdev_name": "Malloc2p2" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd14", 00:07:49.341 "bdev_name": "Malloc2p3" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd15", 00:07:49.341 "bdev_name": "Malloc2p4" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd2", 00:07:49.341 "bdev_name": "Malloc2p5" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd3", 00:07:49.341 "bdev_name": "Malloc2p6" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd4", 00:07:49.341 "bdev_name": "Malloc2p7" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd5", 00:07:49.341 "bdev_name": "TestPT" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd6", 00:07:49.341 
"bdev_name": "raid0" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd7", 00:07:49.341 "bdev_name": "concat0" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd8", 00:07:49.341 "bdev_name": "raid1" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd9", 00:07:49.341 "bdev_name": "AIO0" 00:07:49.341 } 00:07:49.341 ]' 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd0", 00:07:49.341 "bdev_name": "Malloc0" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd1", 00:07:49.341 "bdev_name": "Malloc1p0" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd10", 00:07:49.341 "bdev_name": "Malloc1p1" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd11", 00:07:49.341 "bdev_name": "Malloc2p0" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd12", 00:07:49.341 "bdev_name": "Malloc2p1" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd13", 00:07:49.341 "bdev_name": "Malloc2p2" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd14", 00:07:49.341 "bdev_name": "Malloc2p3" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd15", 00:07:49.341 "bdev_name": "Malloc2p4" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd2", 00:07:49.341 "bdev_name": "Malloc2p5" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd3", 00:07:49.341 "bdev_name": "Malloc2p6" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd4", 00:07:49.341 "bdev_name": "Malloc2p7" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd5", 00:07:49.341 "bdev_name": "TestPT" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd6", 00:07:49.341 "bdev_name": "raid0" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd7", 00:07:49.341 "bdev_name": "concat0" 00:07:49.341 }, 00:07:49.341 { 
00:07:49.341 "nbd_device": "/dev/nbd8", 00:07:49.341 "bdev_name": "raid1" 00:07:49.341 }, 00:07:49.341 { 00:07:49.341 "nbd_device": "/dev/nbd9", 00:07:49.341 "bdev_name": "AIO0" 00:07:49.341 } 00:07:49.341 ]' 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:49.341 /dev/nbd1 00:07:49.341 /dev/nbd10 00:07:49.341 /dev/nbd11 00:07:49.341 /dev/nbd12 00:07:49.341 /dev/nbd13 00:07:49.341 /dev/nbd14 00:07:49.341 /dev/nbd15 00:07:49.341 /dev/nbd2 00:07:49.341 /dev/nbd3 00:07:49.341 /dev/nbd4 00:07:49.341 /dev/nbd5 00:07:49.341 /dev/nbd6 00:07:49.341 /dev/nbd7 00:07:49.341 /dev/nbd8 00:07:49.341 /dev/nbd9' 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:49.341 /dev/nbd1 00:07:49.341 /dev/nbd10 00:07:49.341 /dev/nbd11 00:07:49.341 /dev/nbd12 00:07:49.341 /dev/nbd13 00:07:49.341 /dev/nbd14 00:07:49.341 /dev/nbd15 00:07:49.341 /dev/nbd2 00:07:49.341 /dev/nbd3 00:07:49.341 /dev/nbd4 00:07:49.341 /dev/nbd5 00:07:49.341 /dev/nbd6 00:07:49.341 /dev/nbd7 00:07:49.341 /dev/nbd8 00:07:49.341 /dev/nbd9' 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:49.341 256+0 records in 00:07:49.341 256+0 records out 00:07:49.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490732 s, 214 MB/s 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.341 10:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:49.600 256+0 records in 00:07:49.600 256+0 records out 00:07:49.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135649 s, 7.7 MB/s 00:07:49.600 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.600 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:49.600 256+0 records in 00:07:49.600 256+0 records out 00:07:49.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139479 s, 7.5 MB/s 00:07:49.600 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:07:49.600 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:49.857 256+0 records in 00:07:49.857 256+0 records out 00:07:49.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138018 s, 7.6 MB/s 00:07:49.857 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.857 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:49.857 256+0 records in 00:07:49.857 256+0 records out 00:07:49.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138199 s, 7.6 MB/s 00:07:49.857 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.857 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:50.114 256+0 records in 00:07:50.114 256+0 records out 00:07:50.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137828 s, 7.6 MB/s 00:07:50.114 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.114 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:50.371 256+0 records in 00:07:50.371 256+0 records out 00:07:50.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137962 s, 7.6 MB/s 00:07:50.371 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.371 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:50.371 256+0 records in 00:07:50.371 256+0 
records out 00:07:50.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13737 s, 7.6 MB/s 00:07:50.372 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.372 10:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:07:50.629 256+0 records in 00:07:50.629 256+0 records out 00:07:50.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13669 s, 7.7 MB/s 00:07:50.629 10:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.629 10:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:07:50.629 256+0 records in 00:07:50.629 256+0 records out 00:07:50.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136764 s, 7.7 MB/s 00:07:50.629 10:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.629 10:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:07:50.888 256+0 records in 00:07:50.888 256+0 records out 00:07:50.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136852 s, 7.7 MB/s 00:07:50.888 10:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.888 10:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:07:50.888 256+0 records in 00:07:50.888 256+0 records out 00:07:50.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13677 s, 7.7 MB/s 00:07:50.888 10:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.888 10:22:54 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:07:51.147 256+0 records in 00:07:51.147 256+0 records out 00:07:51.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137001 s, 7.7 MB/s 00:07:51.147 10:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.147 10:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:07:51.147 256+0 records in 00:07:51.147 256+0 records out 00:07:51.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137778 s, 7.6 MB/s 00:07:51.147 10:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.147 10:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:07:51.405 256+0 records in 00:07:51.405 256+0 records out 00:07:51.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138903 s, 7.5 MB/s 00:07:51.405 10:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.405 10:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:07:51.405 256+0 records in 00:07:51.405 256+0 records out 00:07:51.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142666 s, 7.3 MB/s 00:07:51.405 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.405 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:07:51.664 256+0 records in 00:07:51.664 256+0 records out 00:07:51.664 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.121924 s, 8.6 MB/s 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:51.664 
10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for 
i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 
/dev/nbd9' 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.664 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:51.923 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:51.923 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:51.923 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:51.923 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.923 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.923 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:51.923 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.923 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.923 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.923 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:52.181 10:22:55 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:52.181 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:52.181 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:52.181 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.181 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.181 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:52.181 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.181 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.181 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.181 10:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:52.439 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:52.439 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:52.439 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:52.439 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.439 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.439 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:52.439 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.439 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.439 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.439 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:52.698 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:52.698 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:52.698 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:52.698 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.698 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.698 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:52.698 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.698 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.698 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.698 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:52.955 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:52.955 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:52.955 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:52.955 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.955 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.955 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:52.955 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.955 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.955 10:22:56 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.955 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:53.213 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:53.213 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:53.213 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:53.213 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.213 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.213 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:53.213 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:53.213 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.213 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.213 10:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:53.471 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:53.471 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:53.471 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:53.471 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.471 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.471 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:53.471 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:53.471 10:22:57 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.471 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.471 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:07:53.729 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:07:53.729 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:07:53.729 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:07:53.729 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.729 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.729 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:07:53.729 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:53.729 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.729 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.729 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:53.987 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:53.987 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:53.987 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:53.987 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.987 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.987 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 
/proc/partitions 00:07:53.987 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:53.987 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.987 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.987 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:54.245 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:54.245 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:54.245 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:54.245 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.245 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.245 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:54.245 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.245 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.245 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.245 10:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:54.502 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:54.502 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:54.502 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:54.502 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.502 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # 
(( i <= 20 )) 00:07:54.502 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:54.502 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.502 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.502 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.502 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:54.759 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:54.759 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:54.759 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:54.759 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.759 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.759 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:54.759 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.759 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.759 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.759 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:55.016 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:55.016 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:55.016 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:55.016 10:22:58 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.016 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.016 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:55.016 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.016 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.016 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.016 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:07:55.273 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:07:55.273 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:07:55.273 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:07:55.273 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.273 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.273 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:07:55.273 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.273 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.273 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.273 10:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:07:55.530 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:07:55.530 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:07:55.530 10:22:59 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:07:55.530 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.530 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.530 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:07:55.530 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.530 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.530 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.530 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:07:55.788 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:07:55.788 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:07:55.788 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:07:55.788 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.788 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.788 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:07:55.788 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.788 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.788 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:55.788 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.788 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:56.045 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:56.045 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:56.045 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:56.303 10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:56.303 
10:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:56.561 malloc_lvol_verify 00:07:56.561 10:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:56.819 892ed3ef-f9c9-431f-b51b-b1b5ad99bb49 00:07:56.819 10:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:57.076 607850f7-1ec3-4652-9f38-5f6220a92b88 00:07:57.076 10:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:57.334 /dev/nbd0 00:07:57.334 10:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:57.334 mke2fs 1.46.5 (30-Dec-2021) 00:07:57.334 Discarding device blocks: 0/4096 done 00:07:57.334 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:57.334 00:07:57.334 Allocating group tables: 0/1 done 00:07:57.334 Writing inode tables: 0/1 done 00:07:57.334 Creating journal (1024 blocks): done 00:07:57.334 Writing superblocks and filesystem accounting information: 0/1 done 00:07:57.334 00:07:57.334 10:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:57.334 10:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:57.334 10:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.334 10:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:57.334 10:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:07:57.334 10:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:57.334 10:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.334 10:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 2316189 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 2316189 ']' 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 2316189 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2316189 00:07:57.592 10:23:01 
blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2316189' 00:07:57.592 killing process with pid 2316189 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@969 -- # kill 2316189 00:07:57.592 10:23:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@974 -- # wait 2316189 00:07:58.159 10:23:01 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:58.159 00:07:58.159 real 0m23.310s 00:07:58.159 user 0m29.940s 00:07:58.159 sys 0m12.341s 00:07:58.159 10:23:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.159 10:23:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:58.159 ************************************ 00:07:58.159 END TEST bdev_nbd 00:07:58.159 ************************************ 00:07:58.159 10:23:01 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:58.159 10:23:01 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:07:58.159 10:23:01 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:07:58.159 10:23:01 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:07:58.159 10:23:01 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:58.159 10:23:01 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.159 10:23:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:58.159 ************************************ 00:07:58.159 START TEST bdev_fio 00:07:58.159 ************************************ 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:07:58.159 10:23:01 
blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:07:58.159 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:07:58.159 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 
00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.160 10:23:01 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:07:58.160 ************************************ 00:07:58.160 START TEST bdev_fio_rw_verify 00:07:58.160 ************************************ 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:07:58.160 10:23:01 
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:07:58.160 10:23:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:07:58.419 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:58.419 fio-3.35 00:07:58.419 Starting 16 threads 00:08:10.683 00:08:10.683 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=2319759: Thu Jul 25 10:23:12 2024 00:08:10.683 read: IOPS=105k, BW=411MiB/s (431MB/s)(4115MiB/10001msec) 00:08:10.683 slat (usec): min=2, max=1055, avg=29.74, stdev=11.48 00:08:10.683 clat (usec): min=10, max=1413, avg=240.10, stdev=108.23 00:08:10.683 lat (usec): min=18, max=1442, avg=269.83, stdev=113.64 00:08:10.683 clat percentiles (usec): 00:08:10.683 | 50.000th=[ 233], 99.000th=[ 465], 99.900th=[ 586], 99.990th=[ 717], 00:08:10.683 | 99.999th=[ 799] 00:08:10.683 write: IOPS=166k, BW=650MiB/s (682MB/s)(6420MiB/9874msec); 0 zone resets 00:08:10.683 slat (usec): min=4, max=690, avg=42.30, stdev=11.85 00:08:10.683 clat (usec): min=13, max=1138, avg=286.09, stdev=128.35 00:08:10.683 lat (usec): min=32, max=1181, avg=328.39, stdev=133.61 00:08:10.683 clat percentiles (usec): 
00:08:10.683 | 50.000th=[ 273], 99.000th=[ 603], 99.900th=[ 766], 99.990th=[ 832], 00:08:10.683 | 99.999th=[ 914] 00:08:10.683 bw ( KiB/s): min=543229, max=840939, per=98.99%, avg=659088.58, stdev=5637.21, samples=304 00:08:10.683 iops : min=135807, max=210236, avg=164772.00, stdev=1409.29, samples=304 00:08:10.683 lat (usec) : 20=0.01%, 50=0.89%, 100=6.33%, 250=41.10%, 500=48.77% 00:08:10.683 lat (usec) : 750=2.80%, 1000=0.10% 00:08:10.683 lat (msec) : 2=0.01% 00:08:10.683 cpu : usr=98.96%, sys=0.48%, ctx=705, majf=0, minf=2584 00:08:10.683 IO depths : 1=12.4%, 2=24.8%, 4=50.3%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:10.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:10.683 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:10.683 issued rwts: total=1053467,1643594,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:10.683 latency : target=0, window=0, percentile=100.00%, depth=8 00:08:10.683 00:08:10.683 Run status group 0 (all jobs): 00:08:10.683 READ: bw=411MiB/s (431MB/s), 411MiB/s-411MiB/s (431MB/s-431MB/s), io=4115MiB (4315MB), run=10001-10001msec 00:08:10.683 WRITE: bw=650MiB/s (682MB/s), 650MiB/s-650MiB/s (682MB/s-682MB/s), io=6420MiB (6732MB), run=9874-9874msec 00:08:10.683 00:08:10.683 real 0m11.624s 00:08:10.683 user 2m40.832s 00:08:10.683 sys 0m1.506s 00:08:10.683 10:23:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.683 10:23:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:08:10.683 ************************************ 00:08:10.683 END TEST bdev_fio_rw_verify 00:08:10.683 ************************************ 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:08:10.683 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:10.685 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "e807163a-5629-474d-9939-600aaf58bad5"' ' ],' ' "product_name": 
"Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e807163a-5629-474d-9939-600aaf58bad5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "9f630bed-d2ba-59b0-80a8-a467d31b6b76"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9f630bed-d2ba-59b0-80a8-a467d31b6b76",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' 
"5fbd16a3-a729-5463-a69f-d7383ca6728a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5fbd16a3-a729-5463-a69f-d7383ca6728a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "2b010829-5020-5838-b89c-d21bacb58b31"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2b010829-5020-5838-b89c-d21bacb58b31",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "5eb5676b-4ad2-5fc4-8fe1-e7ff605398af"' ' ],' ' 
"product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5eb5676b-4ad2-5fc4-8fe1-e7ff605398af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "37d9ed11-2919-5a50-9fce-39b2244a7e10"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "37d9ed11-2919-5a50-9fce-39b2244a7e10",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "8fee3a37-c1bb-5b67-907a-e8eabdc7c784"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 
512,' ' "num_blocks": 8192,' ' "uuid": "8fee3a37-c1bb-5b67-907a-e8eabdc7c784",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7dc0b31f-e33e-5e4c-b261-c47486217aea"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7dc0b31f-e33e-5e4c-b261-c47486217aea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "d9e0f6d9-a91d-5685-b377-81284be87ab2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"d9e0f6d9-a91d-5685-b377-81284be87ab2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "4dcea207-0b8f-5362-bff0-bcb2af36571f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4dcea207-0b8f-5362-bff0-bcb2af36571f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "baacc9bd-bdcc-5dcf-9355-25dbebd675e3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "baacc9bd-bdcc-5dcf-9355-25dbebd675e3",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "bb613ec1-124c-5a7b-8eaa-ad1a24c1c4dd"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bb613ec1-124c-5a7b-8eaa-ad1a24c1c4dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "490d2f68-c32b-437f-918d-2b25259f8234"' ' ],' ' 
"product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "490d2f68-c32b-437f-918d-2b25259f8234",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "490d2f68-c32b-437f-918d-2b25259f8234",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "991d188f-da68-4bc0-8c74-263cec302215",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f579d79b-83ac-42e4-84c2-b4ec6cdf0fc2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "1ba91fe5-1ed8-4c93-a76a-a92650a77cea"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1ba91fe5-1ed8-4c93-a76a-a92650a77cea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1ba91fe5-1ed8-4c93-a76a-a92650a77cea",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "766837c1-3ca0-44d2-bca5-69ceb9c43929",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e8f92c06-1f27-4c70-91b4-186f276e8757",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "496b1ef8-ae0e-43c2-adbb-4b410215129e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "496b1ef8-ae0e-43c2-adbb-4b410215129e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "496b1ef8-ae0e-43c2-adbb-4b410215129e",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "93093d37-4458-4138-b973-32f008a57910",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "a2dbf399-0dd5-4057-8b83-6e317688f0ba",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5944f032-24e4-43c3-80ee-802da6af41ea"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5944f032-24e4-43c3-80ee-802da6af41ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:10.685 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:08:10.685 Malloc1p0 00:08:10.685 Malloc1p1 00:08:10.685 Malloc2p0 00:08:10.685 Malloc2p1 00:08:10.685 Malloc2p2 00:08:10.685 Malloc2p3 00:08:10.685 Malloc2p4 00:08:10.685 Malloc2p5 00:08:10.685 Malloc2p6 00:08:10.685 Malloc2p7 00:08:10.685 TestPT 00:08:10.685 raid0 00:08:10.685 concat0 ]] 00:08:10.685 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "e807163a-5629-474d-9939-600aaf58bad5"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e807163a-5629-474d-9939-600aaf58bad5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' 
"dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "9f630bed-d2ba-59b0-80a8-a467d31b6b76"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9f630bed-d2ba-59b0-80a8-a467d31b6b76",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "5fbd16a3-a729-5463-a69f-d7383ca6728a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5fbd16a3-a729-5463-a69f-d7383ca6728a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "2b010829-5020-5838-b89c-d21bacb58b31"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2b010829-5020-5838-b89c-d21bacb58b31",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "5eb5676b-4ad2-5fc4-8fe1-e7ff605398af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5eb5676b-4ad2-5fc4-8fe1-e7ff605398af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": 
"Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "37d9ed11-2919-5a50-9fce-39b2244a7e10"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "37d9ed11-2919-5a50-9fce-39b2244a7e10",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "8fee3a37-c1bb-5b67-907a-e8eabdc7c784"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8fee3a37-c1bb-5b67-907a-e8eabdc7c784",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' 
'}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7dc0b31f-e33e-5e4c-b261-c47486217aea"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7dc0b31f-e33e-5e4c-b261-c47486217aea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "d9e0f6d9-a91d-5685-b377-81284be87ab2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d9e0f6d9-a91d-5685-b377-81284be87ab2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' 
"4dcea207-0b8f-5362-bff0-bcb2af36571f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4dcea207-0b8f-5362-bff0-bcb2af36571f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "baacc9bd-bdcc-5dcf-9355-25dbebd675e3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "baacc9bd-bdcc-5dcf-9355-25dbebd675e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "bb613ec1-124c-5a7b-8eaa-ad1a24c1c4dd"' ' ],' ' 
"product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bb613ec1-124c-5a7b-8eaa-ad1a24c1c4dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "490d2f68-c32b-437f-918d-2b25259f8234"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "490d2f68-c32b-437f-918d-2b25259f8234",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' 
"dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "490d2f68-c32b-437f-918d-2b25259f8234",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "991d188f-da68-4bc0-8c74-263cec302215",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f579d79b-83ac-42e4-84c2-b4ec6cdf0fc2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "1ba91fe5-1ed8-4c93-a76a-a92650a77cea"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1ba91fe5-1ed8-4c93-a76a-a92650a77cea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1ba91fe5-1ed8-4c93-a76a-a92650a77cea",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "766837c1-3ca0-44d2-bca5-69ceb9c43929",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e8f92c06-1f27-4c70-91b4-186f276e8757",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "496b1ef8-ae0e-43c2-adbb-4b410215129e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "496b1ef8-ae0e-43c2-adbb-4b410215129e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "496b1ef8-ae0e-43c2-adbb-4b410215129e",' ' 
"strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "93093d37-4458-4138-b973-32f008a57910",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "a2dbf399-0dd5-4057-8b83-6e317688f0ba",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5944f032-24e4-43c3-80ee-802da6af41ea"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5944f032-24e4-43c3-80ee-802da6af41ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 
00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' 
"${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 
blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.686 10:23:13 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:10.686 ************************************ 00:08:10.686 START TEST bdev_fio_trim 00:08:10.686 ************************************ 00:08:10.686 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:08:10.687 
10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:10.687 10:23:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:10.687 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.687 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 
00:08:10.687 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.687 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.687 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.687 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.687 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.687 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.687 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.687 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.687 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.687 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.687 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.687 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.687 fio-3.35 00:08:10.687 Starting 14 threads 00:08:22.887 00:08:22.888 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=2321318: Thu Jul 25 10:23:24 2024 00:08:22.888 write: IOPS=149k, BW=583MiB/s (611MB/s)(5831MiB/10001msec); 0 zone resets 00:08:22.888 slat (usec): min=3, max=3233, avg=32.59, stdev= 7.55 00:08:22.888 clat (usec): min=13, max=3523, avg=233.68, stdev=71.94 00:08:22.888 lat (usec): 
min=33, max=3541, avg=266.27, stdev=73.81 00:08:22.888 clat percentiles (usec): 00:08:22.888 | 50.000th=[ 229], 99.000th=[ 375], 99.900th=[ 408], 99.990th=[ 510], 00:08:22.888 | 99.999th=[ 594] 00:08:22.888 bw ( KiB/s): min=577580, max=640827, per=100.00%, avg=597289.05, stdev=1097.91, samples=266 00:08:22.888 iops : min=144395, max=160206, avg=149322.37, stdev=274.46, samples=266 00:08:22.888 trim: IOPS=149k, BW=583MiB/s (611MB/s)(5831MiB/10001msec); 0 zone resets 00:08:22.888 slat (usec): min=4, max=112, avg=22.82, stdev= 5.30 00:08:22.888 clat (usec): min=17, max=3541, avg=262.67, stdev=78.49 00:08:22.888 lat (usec): min=24, max=3558, avg=285.48, stdev=80.42 00:08:22.888 clat percentiles (usec): 00:08:22.888 | 50.000th=[ 260], 99.000th=[ 412], 99.900th=[ 445], 99.990th=[ 562], 00:08:22.888 | 99.999th=[ 660] 00:08:22.888 bw ( KiB/s): min=577580, max=640827, per=100.00%, avg=597289.89, stdev=1098.01, samples=266 00:08:22.888 iops : min=144395, max=160206, avg=149322.47, stdev=274.49, samples=266 00:08:22.888 lat (usec) : 20=0.01%, 50=0.02%, 100=1.33%, 250=51.02%, 500=47.61% 00:08:22.888 lat (usec) : 750=0.02%, 1000=0.01% 00:08:22.888 lat (msec) : 2=0.01%, 4=0.01% 00:08:22.888 cpu : usr=99.48%, sys=0.01%, ctx=611, majf=0, minf=1069 00:08:22.888 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:22.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:22.888 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:22.888 issued rwts: total=0,1492624,1492630,0 short=0,0,0,0 dropped=0,0,0,0 00:08:22.888 latency : target=0, window=0, percentile=100.00%, depth=8 00:08:22.888 00:08:22.888 Run status group 0 (all jobs): 00:08:22.888 WRITE: bw=583MiB/s (611MB/s), 583MiB/s-583MiB/s (611MB/s-611MB/s), io=5831MiB (6114MB), run=10001-10001msec 00:08:22.888 TRIM: bw=583MiB/s (611MB/s), 583MiB/s-583MiB/s (611MB/s-611MB/s), io=5831MiB (6114MB), run=10001-10001msec 00:08:22.888 00:08:22.888 real 
0m11.627s 00:08:22.888 user 2m21.898s 00:08:22.888 sys 0m0.671s 00:08:22.888 10:23:25 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.888 10:23:25 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:08:22.888 ************************************ 00:08:22.888 END TEST bdev_fio_trim 00:08:22.888 ************************************ 00:08:22.888 10:23:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:08:22.888 10:23:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:22.888 10:23:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:08:22.888 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:22.888 10:23:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:08:22.888 00:08:22.888 real 0m23.507s 00:08:22.888 user 5m2.883s 00:08:22.888 sys 0m2.296s 00:08:22.888 10:23:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.888 10:23:25 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:22.888 ************************************ 00:08:22.888 END TEST bdev_fio 00:08:22.888 ************************************ 00:08:22.888 10:23:25 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:22.888 10:23:25 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:22.888 10:23:25 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:22.888 10:23:25 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.888 10:23:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:22.888 
************************************ 00:08:22.888 START TEST bdev_verify 00:08:22.888 ************************************ 00:08:22.888 10:23:25 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:22.888 [2024-07-25 10:23:25.231187] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:08:22.888 [2024-07-25 10:23:25.231252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322611 ] 00:08:22.888 [2024-07-25 10:23:25.313296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:22.888 [2024-07-25 10:23:25.433885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.888 [2024-07-25 10:23:25.433890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.888 [2024-07-25 10:23:25.616512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:22.888 [2024-07-25 10:23:25.616585] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:22.888 [2024-07-25 10:23:25.616604] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:22.888 [2024-07-25 10:23:25.624511] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:22.888 [2024-07-25 10:23:25.624546] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:22.888 [2024-07-25 10:23:25.632520] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:22.888 [2024-07-25 10:23:25.632552] bdev.c:8190:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: Malloc2 00:08:22.888 [2024-07-25 10:23:25.720478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:22.888 [2024-07-25 10:23:25.720558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.888 [2024-07-25 10:23:25.720584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2905aa0 00:08:22.888 [2024-07-25 10:23:25.720600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.888 [2024-07-25 10:23:25.722324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.888 [2024-07-25 10:23:25.722349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:22.888 Running I/O for 5 seconds... 00:08:28.154 00:08:28.154 Latency(us) 00:08:28.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.154 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x1000 00:08:28.154 Malloc0 : 5.13 1672.56 6.53 0.00 0.00 76428.52 512.76 143693.75 00:08:28.154 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x1000 length 0x1000 00:08:28.154 Malloc0 : 5.08 1712.88 6.69 0.00 0.00 74642.87 506.69 223696.21 00:08:28.154 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x800 00:08:28.154 Malloc1p0 : 5.13 848.42 3.31 0.00 0.00 150455.18 2791.35 140586.86 00:08:28.154 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x800 length 0x800 00:08:28.154 Malloc1p0 : 5.08 881.27 3.44 0.00 0.00 144884.12 2803.48 122722.23 00:08:28.154 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x800 
00:08:28.154 Malloc1p1 : 5.13 848.08 3.31 0.00 0.00 150260.65 2657.85 137479.96 00:08:28.154 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x800 length 0x800 00:08:28.154 Malloc1p1 : 5.09 880.93 3.44 0.00 0.00 144707.19 2669.99 121168.78 00:08:28.154 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x200 00:08:28.154 Malloc2p0 : 5.13 847.75 3.31 0.00 0.00 150065.54 2560.76 135149.80 00:08:28.154 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x200 length 0x200 00:08:28.154 Malloc2p0 : 5.09 880.57 3.44 0.00 0.00 144526.81 2536.49 120392.06 00:08:28.154 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x200 00:08:28.154 Malloc2p1 : 5.14 847.42 3.31 0.00 0.00 149882.32 2487.94 133596.35 00:08:28.154 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x200 length 0x200 00:08:28.154 Malloc2p1 : 5.09 880.20 3.44 0.00 0.00 144361.36 2463.67 118838.61 00:08:28.154 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x200 00:08:28.154 Malloc2p2 : 5.14 847.09 3.31 0.00 0.00 149702.82 2415.12 131266.18 00:08:28.154 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x200 length 0x200 00:08:28.154 Malloc2p2 : 5.09 879.83 3.44 0.00 0.00 144197.19 2451.53 116508.44 00:08:28.154 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x200 00:08:28.154 Malloc2p3 : 5.14 846.76 3.31 0.00 0.00 149519.09 2402.99 128936.01 00:08:28.154 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 4096) 00:08:28.154 Verification LBA range: start 0x200 length 0x200 00:08:28.154 Malloc2p3 : 5.09 879.46 3.44 0.00 0.00 144036.76 2402.99 114955.00 00:08:28.154 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x200 00:08:28.154 Malloc2p4 : 5.14 846.43 3.31 0.00 0.00 149334.59 2463.67 126605.84 00:08:28.154 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x200 length 0x200 00:08:28.154 Malloc2p4 : 5.17 891.63 3.48 0.00 0.00 141874.14 2487.94 112624.83 00:08:28.154 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x200 00:08:28.154 Malloc2p5 : 5.14 846.10 3.31 0.00 0.00 149144.76 2536.49 124275.67 00:08:28.154 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x200 length 0x200 00:08:28.154 Malloc2p5 : 5.17 891.27 3.48 0.00 0.00 141716.50 2463.67 111071.38 00:08:28.154 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x200 00:08:28.154 Malloc2p6 : 5.15 845.76 3.30 0.00 0.00 148950.11 2524.35 121945.51 00:08:28.154 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x200 length 0x200 00:08:28.154 Malloc2p6 : 5.17 890.86 3.48 0.00 0.00 141550.27 2439.40 108741.21 00:08:28.154 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x200 00:08:28.154 Malloc2p7 : 5.15 845.41 3.30 0.00 0.00 148771.79 2196.67 122722.23 00:08:28.154 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x200 length 0x200 00:08:28.154 Malloc2p7 : 5.18 890.25 3.48 0.00 0.00 141429.65 2415.12 107187.77 
00:08:28.154 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x1000 00:08:28.154 TestPT : 5.15 845.06 3.30 0.00 0.00 148546.29 2572.89 119615.34 00:08:28.154 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x1000 length 0x1000 00:08:28.154 TestPT : 5.18 864.98 3.38 0.00 0.00 145204.32 7378.87 168548.88 00:08:28.154 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x2000 00:08:28.154 raid0 : 5.15 844.71 3.30 0.00 0.00 148343.42 2718.53 118838.61 00:08:28.154 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x2000 length 0x2000 00:08:28.154 raid0 : 5.18 889.38 3.47 0.00 0.00 141071.75 2694.26 95148.56 00:08:28.154 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x2000 00:08:28.154 concat0 : 5.15 844.37 3.30 0.00 0.00 148148.05 2645.71 115731.72 00:08:28.154 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x2000 length 0x2000 00:08:28.154 concat0 : 5.18 889.08 3.47 0.00 0.00 140899.81 2730.67 93595.12 00:08:28.154 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x1000 00:08:28.154 raid1 : 5.17 866.09 3.38 0.00 0.00 144160.72 2815.62 111071.38 00:08:28.154 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x1000 length 0x1000 00:08:28.154 raid1 : 5.18 888.77 3.47 0.00 0.00 140689.76 3179.71 97867.09 00:08:28.154 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x0 length 0x4e2 00:08:28.154 AIO0 : 5.18 865.55 3.38 0.00 0.00 143907.09 
1292.52 111071.38 00:08:28.154 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:28.154 Verification LBA range: start 0x4e2 length 0x4e2 00:08:28.154 AIO0 : 5.19 888.52 3.47 0.00 0.00 140482.22 1547.38 102527.43 00:08:28.154 =================================================================================================================== 00:08:28.154 Total : 29387.44 114.79 0.00 0.00 137596.50 506.69 223696.21 00:08:28.154 00:08:28.154 real 0m6.465s 00:08:28.154 user 0m11.930s 00:08:28.154 sys 0m0.440s 00:08:28.154 10:23:31 blockdev_general.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.154 10:23:31 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:28.154 ************************************ 00:08:28.154 END TEST bdev_verify 00:08:28.155 ************************************ 00:08:28.155 10:23:31 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:28.155 10:23:31 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:28.155 10:23:31 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.155 10:23:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:28.155 ************************************ 00:08:28.155 START TEST bdev_verify_big_io 00:08:28.155 ************************************ 00:08:28.155 10:23:31 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:28.155 [2024-07-25 10:23:31.745227] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:08:28.155 [2024-07-25 10:23:31.745300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2323378 ] 00:08:28.155 [2024-07-25 10:23:31.826898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:28.413 [2024-07-25 10:23:31.952347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.413 [2024-07-25 10:23:31.952353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.671 [2024-07-25 10:23:32.122331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:28.671 [2024-07-25 10:23:32.122410] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:28.671 [2024-07-25 10:23:32.122431] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:28.671 [2024-07-25 10:23:32.130336] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:28.671 [2024-07-25 10:23:32.130381] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:28.671 [2024-07-25 10:23:32.138345] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:28.671 [2024-07-25 10:23:32.138372] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:28.671 [2024-07-25 10:23:32.224657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:28.671 [2024-07-25 10:23:32.224732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.671 [2024-07-25 10:23:32.224758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x153faa0 00:08:28.671 [2024-07-25 10:23:32.224774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:08:28.671 [2024-07-25 10:23:32.226498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.671 [2024-07-25 10:23:32.226528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:28.929 [2024-07-25 10:23:32.403606] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.404779] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.406568] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.407706] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.409471] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.410595] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.412339] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.414118] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.415188] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.416914] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.418010] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.419790] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.420938] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). 
Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.422732] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.423803] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.425596] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:08:28.929 [2024-07-25 10:23:32.455692] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:08:28.929 [2024-07-25 10:23:32.458324] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:08:28.929 Running I/O for 5 seconds... 
00:08:35.488 00:08:35.489 Latency(us) 00:08:35.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.489 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x100 00:08:35.489 Malloc0 : 5.91 216.72 13.54 0.00 0.00 583101.08 807.06 1565873.49 00:08:35.489 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x100 length 0x100 00:08:35.489 Malloc0 : 5.45 258.38 16.15 0.00 0.00 488232.99 794.93 1715004.30 00:08:35.489 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x80 00:08:35.489 Malloc1p0 : 6.01 121.21 7.58 0.00 0.00 1009554.87 2621.44 1926272.95 00:08:35.489 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x80 length 0x80 00:08:35.489 Malloc1p0 : 6.04 52.94 3.31 0.00 0.00 2258479.63 1498.83 3479718.87 00:08:35.489 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x80 00:08:35.489 Malloc1p1 : 6.22 46.27 2.89 0.00 0.00 2559589.99 1523.11 4349648.59 00:08:35.489 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x80 length 0x80 00:08:35.489 Malloc1p1 : 6.12 54.86 3.43 0.00 0.00 2134650.13 1486.70 3355443.20 00:08:35.489 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x20 00:08:35.489 Malloc2p0 : 5.97 32.16 2.01 0.00 0.00 921897.38 612.88 1391887.55 00:08:35.489 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x20 length 0x20 00:08:35.489 Malloc2p0 : 5.82 41.22 2.58 0.00 0.00 711407.07 649.29 1106053.50 00:08:35.489 Job: Malloc2p1 (Core Mask 0x1, 
workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x20 00:08:35.489 Malloc2p1 : 5.97 32.15 2.01 0.00 0.00 916686.21 694.80 1385673.77 00:08:35.489 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x20 length 0x20 00:08:35.489 Malloc2p1 : 5.82 41.21 2.58 0.00 0.00 707197.57 934.49 1087412.15 00:08:35.489 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x20 00:08:35.489 Malloc2p2 : 5.97 32.14 2.01 0.00 0.00 912151.13 885.95 1379459.98 00:08:35.489 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x20 length 0x20 00:08:35.489 Malloc2p2 : 5.83 41.20 2.57 0.00 0.00 702628.10 703.91 1068770.80 00:08:35.489 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x20 00:08:35.489 Malloc2p3 : 5.97 32.14 2.01 0.00 0.00 907430.28 685.70 1367032.41 00:08:35.489 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x20 length 0x20 00:08:35.489 Malloc2p3 : 5.83 41.19 2.57 0.00 0.00 698633.56 661.43 1050129.45 00:08:35.489 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x20 00:08:35.489 Malloc2p4 : 5.98 32.13 2.01 0.00 0.00 902561.27 646.26 1360818.63 00:08:35.489 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x20 length 0x20 00:08:35.489 Malloc2p4 : 5.83 41.18 2.57 0.00 0.00 694315.63 688.73 1031488.09 00:08:35.489 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x20 00:08:35.489 Malloc2p5 : 5.98 32.12 2.01 0.00 0.00 897576.54 670.53 
1348391.06 00:08:35.489 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x20 length 0x20 00:08:35.489 Malloc2p5 : 5.90 43.40 2.71 0.00 0.00 659967.09 688.73 1019060.53 00:08:35.489 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x20 00:08:35.489 Malloc2p6 : 5.98 32.11 2.01 0.00 0.00 893035.56 673.56 1342177.28 00:08:35.489 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x20 length 0x20 00:08:35.489 Malloc2p6 : 5.90 43.39 2.71 0.00 0.00 655999.08 673.56 1000419.18 00:08:35.489 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x20 00:08:35.489 Malloc2p7 : 5.98 32.11 2.01 0.00 0.00 887485.59 670.53 1329749.71 00:08:35.489 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x20 length 0x20 00:08:35.489 Malloc2p7 : 5.90 43.38 2.71 0.00 0.00 652217.82 661.43 987991.61 00:08:35.489 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x100 00:08:35.489 TestPT : 6.22 44.02 2.75 0.00 0.00 2515804.26 75730.49 3902256.17 00:08:35.489 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x100 length 0x100 00:08:35.489 TestPT : 6.16 51.93 3.25 0.00 0.00 2108404.77 77672.30 2995043.75 00:08:35.489 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x200 00:08:35.489 raid0 : 6.23 46.26 2.89 0.00 0.00 2336880.56 1626.26 4051386.97 00:08:35.489 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x200 length 0x200 00:08:35.489 raid0 : 6.22 
59.18 3.70 0.00 0.00 1804478.45 1565.58 2982616.18 00:08:35.489 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x200 00:08:35.489 concat0 : 6.23 48.80 3.05 0.00 0.00 2177119.61 1553.45 3951966.44 00:08:35.489 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x200 length 0x200 00:08:35.489 concat0 : 6.16 64.88 4.06 0.00 0.00 1623763.66 1589.85 2870768.07 00:08:35.489 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x100 00:08:35.489 raid1 : 6.17 59.65 3.73 0.00 0.00 1763839.84 2063.17 3827690.76 00:08:35.489 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x100 length 0x100 00:08:35.489 raid1 : 6.17 75.22 4.70 0.00 0.00 1389321.53 1990.35 2758919.96 00:08:35.489 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x0 length 0x4e 00:08:35.489 AIO0 : 6.23 68.40 4.27 0.00 0.00 923338.26 752.45 2299099.97 00:08:35.489 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:08:35.489 Verification LBA range: start 0x4e length 0x4e 00:08:35.489 AIO0 : 6.23 89.84 5.62 0.00 0.00 697383.46 694.80 1578301.06 00:08:35.489 =================================================================================================================== 00:08:35.489 Total : 1951.77 121.99 0.00 0.00 1132093.73 612.88 4349648.59 00:08:35.747 00:08:35.747 real 0m7.598s 00:08:35.747 user 0m14.191s 00:08:35.747 sys 0m0.444s 00:08:35.747 10:23:39 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.747 10:23:39 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:35.747 ************************************ 00:08:35.747 END TEST bdev_verify_big_io 
00:08:35.747 ************************************ 00:08:35.747 10:23:39 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:35.747 10:23:39 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:35.747 10:23:39 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.747 10:23:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:35.747 ************************************ 00:08:35.747 START TEST bdev_write_zeroes 00:08:35.747 ************************************ 00:08:35.747 10:23:39 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:35.747 [2024-07-25 10:23:39.385837] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:08:35.747 [2024-07-25 10:23:39.385907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324322 ] 00:08:36.005 [2024-07-25 10:23:39.467324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.005 [2024-07-25 10:23:39.590723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.263 [2024-07-25 10:23:39.762446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:36.263 [2024-07-25 10:23:39.762515] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:36.263 [2024-07-25 10:23:39.762534] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:36.263 [2024-07-25 10:23:39.770451] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:36.263 [2024-07-25 10:23:39.770486] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:36.264 [2024-07-25 10:23:39.778452] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:36.264 [2024-07-25 10:23:39.778484] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:36.264 [2024-07-25 10:23:39.865154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:36.264 [2024-07-25 10:23:39.865226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.264 [2024-07-25 10:23:39.865249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25a02f0 00:08:36.264 [2024-07-25 10:23:39.865264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.264 [2024-07-25 10:23:39.866994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:08:36.264 [2024-07-25 10:23:39.867023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:36.521 Running I/O for 1 seconds... 00:08:37.896 00:08:37.896 Latency(us) 00:08:37.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.896 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 Malloc0 : 1.05 5123.25 20.01 0.00 0.00 24970.36 740.31 44079.03 00:08:37.896 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 Malloc1p0 : 1.05 5116.43 19.99 0.00 0.00 24952.24 995.18 43108.12 00:08:37.896 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 Malloc1p1 : 1.05 5109.60 19.96 0.00 0.00 24932.59 976.97 42331.40 00:08:37.896 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 Malloc2p0 : 1.05 5102.84 19.93 0.00 0.00 24913.57 995.18 41360.50 00:08:37.896 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 Malloc2p1 : 1.05 5096.13 19.91 0.00 0.00 24892.93 989.11 40389.59 00:08:37.896 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 Malloc2p2 : 1.06 5089.42 19.88 0.00 0.00 24872.93 995.18 39418.69 00:08:37.896 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 Malloc2p3 : 1.06 5082.72 19.85 0.00 0.00 24855.23 958.77 38447.79 00:08:37.896 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 Malloc2p4 : 1.06 5076.03 19.83 0.00 0.00 24826.16 1007.31 37476.88 00:08:37.896 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 Malloc2p5 : 1.06 5069.38 19.80 0.00 0.00 24805.10 1007.31 36311.80 00:08:37.896 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 Malloc2p6 : 1.06 5062.72 
19.78 0.00 0.00 24781.98 983.04 35340.89 00:08:37.896 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 Malloc2p7 : 1.06 5056.11 19.75 0.00 0.00 24764.53 1001.24 34369.99 00:08:37.896 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 TestPT : 1.06 5049.35 19.72 0.00 0.00 24745.40 1019.45 33399.09 00:08:37.896 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 raid0 : 1.07 5041.75 19.69 0.00 0.00 24700.63 1759.76 31457.28 00:08:37.896 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 concat0 : 1.07 5034.28 19.67 0.00 0.00 24639.66 1711.22 29903.83 00:08:37.896 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 raid1 : 1.07 5025.08 19.63 0.00 0.00 24568.78 2742.80 26991.12 00:08:37.896 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.896 AIO0 : 1.07 5019.24 19.61 0.00 0.00 24475.50 1122.61 26020.22 00:08:37.896 =================================================================================================================== 00:08:37.896 Total : 81154.32 317.01 0.00 0.00 24793.60 740.31 44079.03 00:08:38.154 00:08:38.154 real 0m2.310s 00:08:38.154 user 0m1.890s 00:08:38.154 sys 0m0.328s 00:08:38.154 10:23:41 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.154 10:23:41 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:38.154 ************************************ 00:08:38.154 END TEST bdev_write_zeroes 00:08:38.154 ************************************ 00:08:38.154 10:23:41 blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:08:38.154 10:23:41 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:38.154 10:23:41 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.154 10:23:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:38.154 ************************************ 00:08:38.154 START TEST bdev_json_nonenclosed 00:08:38.154 ************************************ 00:08:38.154 10:23:41 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:38.154 [2024-07-25 10:23:41.751049] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:08:38.154 [2024-07-25 10:23:41.751132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324605 ] 00:08:38.154 [2024-07-25 10:23:41.830683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.412 [2024-07-25 10:23:41.954093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.412 [2024-07-25 10:23:41.954213] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:08:38.412 [2024-07-25 10:23:41.954238] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:38.412 [2024-07-25 10:23:41.954253] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:38.412 00:08:38.412 real 0m0.390s 00:08:38.412 user 0m0.275s 00:08:38.412 sys 0m0.112s 00:08:38.412 10:23:42 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.412 10:23:42 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:38.412 ************************************ 00:08:38.412 END TEST bdev_json_nonenclosed 00:08:38.412 ************************************ 00:08:38.412 10:23:42 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:38.412 10:23:42 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:38.412 10:23:42 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.412 10:23:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:38.671 ************************************ 00:08:38.671 START TEST bdev_json_nonarray 00:08:38.671 ************************************ 00:08:38.671 10:23:42 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:38.671 [2024-07-25 10:23:42.189521] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:08:38.671 [2024-07-25 10:23:42.189594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324633 ] 00:08:38.671 [2024-07-25 10:23:42.271076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.929 [2024-07-25 10:23:42.393542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.929 [2024-07-25 10:23:42.393640] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:38.929 [2024-07-25 10:23:42.393665] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:38.929 [2024-07-25 10:23:42.393679] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:38.929 00:08:38.929 real 0m0.388s 00:08:38.929 user 0m0.262s 00:08:38.929 sys 0m0.123s 00:08:38.929 10:23:42 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.929 10:23:42 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:38.929 ************************************ 00:08:38.929 END TEST bdev_json_nonarray 00:08:38.929 ************************************ 00:08:38.929 10:23:42 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:08:38.929 10:23:42 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:08:38.929 10:23:42 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.929 10:23:42 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.929 10:23:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:38.929 ************************************ 00:08:38.929 START TEST bdev_qos 00:08:38.929 ************************************ 00:08:38.929 10:23:42 blockdev_general.bdev_qos -- 
common/autotest_common.sh@1125 -- # qos_test_suite '' 00:08:38.929 10:23:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=2324780 00:08:38.929 10:23:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:08:38.929 10:23:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 2324780' 00:08:38.929 Process qos testing pid: 2324780 00:08:38.929 10:23:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:08:38.929 10:23:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 2324780 00:08:38.929 10:23:42 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # '[' -z 2324780 ']' 00:08:38.929 10:23:42 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.929 10:23:42 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.930 10:23:42 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.930 10:23:42 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.930 10:23:42 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:38.930 [2024-07-25 10:23:42.617657] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:08:38.930 [2024-07-25 10:23:42.617726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324780 ] 00:08:39.188 [2024-07-25 10:23:42.694938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.188 [2024-07-25 10:23:42.806692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # return 0 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:40.122 Malloc_0 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_0 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.122 10:23:43 
blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:40.122 [ 00:08:40.122 { 00:08:40.122 "name": "Malloc_0", 00:08:40.122 "aliases": [ 00:08:40.122 "b689bf23-f6b3-452c-b000-12d85eedf492" 00:08:40.122 ], 00:08:40.122 "product_name": "Malloc disk", 00:08:40.122 "block_size": 512, 00:08:40.122 "num_blocks": 262144, 00:08:40.122 "uuid": "b689bf23-f6b3-452c-b000-12d85eedf492", 00:08:40.122 "assigned_rate_limits": { 00:08:40.122 "rw_ios_per_sec": 0, 00:08:40.122 "rw_mbytes_per_sec": 0, 00:08:40.122 "r_mbytes_per_sec": 0, 00:08:40.122 "w_mbytes_per_sec": 0 00:08:40.122 }, 00:08:40.122 "claimed": false, 00:08:40.122 "zoned": false, 00:08:40.122 "supported_io_types": { 00:08:40.122 "read": true, 00:08:40.122 "write": true, 00:08:40.122 "unmap": true, 00:08:40.122 "flush": true, 00:08:40.122 "reset": true, 00:08:40.122 "nvme_admin": false, 00:08:40.122 "nvme_io": false, 00:08:40.122 "nvme_io_md": false, 00:08:40.122 "write_zeroes": true, 00:08:40.122 "zcopy": true, 00:08:40.122 "get_zone_info": false, 00:08:40.122 "zone_management": false, 00:08:40.122 "zone_append": false, 00:08:40.122 "compare": false, 00:08:40.122 "compare_and_write": false, 00:08:40.122 "abort": true, 00:08:40.122 "seek_hole": false, 00:08:40.122 "seek_data": false, 00:08:40.122 "copy": true, 00:08:40.122 "nvme_iov_md": false 00:08:40.122 }, 00:08:40.122 "memory_domains": [ 00:08:40.122 { 00:08:40.122 "dma_device_id": "system", 00:08:40.122 "dma_device_type": 1 00:08:40.122 }, 00:08:40.122 { 00:08:40.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.122 
"dma_device_type": 2 00:08:40.122 } 00:08:40.122 ], 00:08:40.122 "driver_specific": {} 00:08:40.122 } 00:08:40.122 ] 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:40.122 Null_1 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Null_1 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.122 
10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:40.122 [ 00:08:40.122 { 00:08:40.122 "name": "Null_1", 00:08:40.122 "aliases": [ 00:08:40.122 "f6698328-1a32-4fe6-b2c9-1cef9459dced" 00:08:40.122 ], 00:08:40.122 "product_name": "Null disk", 00:08:40.122 "block_size": 512, 00:08:40.122 "num_blocks": 262144, 00:08:40.122 "uuid": "f6698328-1a32-4fe6-b2c9-1cef9459dced", 00:08:40.122 "assigned_rate_limits": { 00:08:40.122 "rw_ios_per_sec": 0, 00:08:40.122 "rw_mbytes_per_sec": 0, 00:08:40.122 "r_mbytes_per_sec": 0, 00:08:40.122 "w_mbytes_per_sec": 0 00:08:40.122 }, 00:08:40.122 "claimed": false, 00:08:40.122 "zoned": false, 00:08:40.122 "supported_io_types": { 00:08:40.122 "read": true, 00:08:40.122 "write": true, 00:08:40.122 "unmap": false, 00:08:40.122 "flush": false, 00:08:40.122 "reset": true, 00:08:40.122 "nvme_admin": false, 00:08:40.122 "nvme_io": false, 00:08:40.122 "nvme_io_md": false, 00:08:40.122 "write_zeroes": true, 00:08:40.122 "zcopy": false, 00:08:40.122 "get_zone_info": false, 00:08:40.122 "zone_management": false, 00:08:40.122 "zone_append": false, 00:08:40.122 "compare": false, 00:08:40.122 "compare_and_write": false, 00:08:40.122 "abort": true, 00:08:40.122 "seek_hole": false, 00:08:40.122 "seek_data": false, 00:08:40.122 "copy": false, 00:08:40.122 "nvme_iov_md": false 00:08:40.122 }, 00:08:40.122 "driver_specific": {} 00:08:40.122 } 00:08:40.122 ] 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- 
# local io_result=0 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:08:40.122 10:23:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:08:40.122 Running I/O for 60 seconds... 
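The trace above shows how the harness measures unthrottled throughput: it runs `iostat.py`, greps for the device name, takes the last matching line with `tail -1`, and pulls a column out with `awk`. A minimal sketch of that column extraction, using a hypothetical iostat-style line shaped like the ones this log prints:

```python
# Sketch of the parsing done by `grep <bdev> | tail -1 | awk '{print $2}'`
# in the trace above. The sample line is hypothetical, modeled on the
# iostat.py output captured in this log.
def parse_iops(iostat_line: str) -> int:
    """Return the IOPS column (field 2), truncated to an integer
    the way the script's `echo ${result%.*}`-style rounding does."""
    fields = iostat_line.split()
    return int(float(fields[1]))

sample = "Malloc_0 66159.29 264637.18 0.00 0.00 266240.00 0.00 0.00"
print(parse_iops(sample))  # truncates 66159.29 -> 66159
```

The truncated integer is what the script compares against the configured limit in the later `'[' ... -lt/-gt ... ']'` checks.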
00:08:45.388 10:23:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 66159.29 264637.18 0.00 0.00 266240.00 0.00 0.00 ' 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=66159.29 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 66159 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=66159 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=16000 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 16000 -gt 1000 ']' 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 16000 Malloc_0 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 16000 IOPS Malloc_0 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.388 10:23:48 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:45.388 ************************************ 00:08:45.388 START TEST bdev_qos_iops 00:08:45.388 ************************************ 00:08:45.388 10:23:48 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # run_qos_test 16000 IOPS Malloc_0 00:08:45.388 10:23:48 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=16000 00:08:45.388 10:23:48 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:08:45.388 10:23:48 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:08:45.388 10:23:48 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:08:45.388 10:23:48 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:08:45.388 10:23:48 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:08:45.388 10:23:48 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:45.388 10:23:48 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:08:45.388 10:23:48 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:08:50.653 10:23:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 15999.76 63999.05 0.00 0.00 64832.00 0.00 0.00 ' 00:08:50.653 10:23:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:08:50.653 10:23:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:08:50.653 10:23:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=15999.76 00:08:50.653 10:23:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 15999 00:08:50.653 10:23:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=15999 00:08:50.653 10:23:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:08:50.653 10:23:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=14400 00:08:50.653 10:23:54 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=17600 00:08:50.653 10:23:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 15999 -lt 14400 ']' 00:08:50.653 10:23:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 15999 -gt 17600 ']' 00:08:50.653 00:08:50.653 real 0m5.204s 00:08:50.653 user 0m0.091s 00:08:50.653 sys 0m0.034s 00:08:50.653 10:23:54 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.653 10:23:54 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:08:50.653 ************************************ 00:08:50.653 END TEST bdev_qos_iops 00:08:50.653 ************************************ 00:08:50.653 10:23:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:08:50.653 10:23:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:08:50.653 10:23:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:08:50.653 10:23:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:08:50.653 10:23:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:50.653 10:23:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:08:50.653 10:23:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 22662.15 90648.60 0.00 0.00 92160.00 0.00 0.00 ' 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:08:55.916 10:23:59 
blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=92160.00 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 92160 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=92160 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=9 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 9 -lt 2 ']' 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 9 Null_1 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 9 BANDWIDTH Null_1 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.916 10:23:59 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:55.916 ************************************ 00:08:55.916 START TEST bdev_qos_bw 00:08:55.916 ************************************ 00:08:55.916 10:23:59 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # run_qos_test 9 BANDWIDTH Null_1 00:08:55.916 10:23:59 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=9 00:08:55.916 10:23:59 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:08:55.916 10:23:59 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:08:55.916 10:23:59 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local 
limit_type=BANDWIDTH 00:08:55.916 10:23:59 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:08:55.916 10:23:59 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:08:55.916 10:23:59 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:55.916 10:23:59 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:08:55.916 10:23:59 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 2304.30 9217.20 0.00 0.00 9384.00 0.00 0.00 ' 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=9384.00 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 9384 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=9384 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=9216 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=8294 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=10137 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 9384 -lt 8294 ']' 00:09:01.180 10:24:04 
blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 9384 -gt 10137 ']' 00:09:01.180 00:09:01.180 real 0m5.222s 00:09:01.180 user 0m0.095s 00:09:01.180 sys 0m0.027s 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:09:01.180 ************************************ 00:09:01.180 END TEST bdev_qos_bw 00:09:01.180 ************************************ 00:09:01.180 10:24:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:09:01.180 10:24:04 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.180 10:24:04 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:01.180 10:24:04 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.180 10:24:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:09:01.180 10:24:04 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:01.180 10:24:04 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.180 10:24:04 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:01.180 ************************************ 00:09:01.180 START TEST bdev_qos_ro_bw 00:09:01.180 ************************************ 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result 
BANDWIDTH Malloc_0 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:09:01.180 10:24:04 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 511.73 2046.94 0.00 0.00 2052.00 0.00 0.00 ' 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2052.00 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2052 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2052 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 
-- # upper_limit=2252 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2052 -lt 1843 ']' 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2052 -gt 2252 ']' 00:09:06.505 00:09:06.505 real 0m5.151s 00:09:06.505 user 0m0.094s 00:09:06.505 sys 0m0.030s 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.505 10:24:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:09:06.505 ************************************ 00:09:06.505 END TEST bdev_qos_ro_bw 00:09:06.505 ************************************ 00:09:06.505 10:24:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:09:06.505 10:24:09 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.505 10:24:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:06.761 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.761 10:24:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:09:06.761 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.761 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:06.761 00:09:06.761 Latency(us) 00:09:06.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.761 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:06.761 Malloc_0 : 26.48 22194.86 86.70 0.00 0.00 11417.49 2099.58 503316.48 00:09:06.761 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:06.761 Null_1 : 26.63 22356.09 87.33 0.00 0.00 11424.77 788.86 141363.58 00:09:06.761 
=================================================================================================================== 00:09:06.761 Total : 44550.95 174.03 0.00 0.00 11421.15 788.86 503316.48 00:09:06.761 0 00:09:06.761 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.761 10:24:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 2324780 00:09:06.761 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # '[' -z 2324780 ']' 00:09:06.761 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # kill -0 2324780 00:09:06.761 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # uname 00:09:07.018 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.018 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2324780 00:09:07.018 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:07.018 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:07.018 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2324780' 00:09:07.018 killing process with pid 2324780 00:09:07.018 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@969 -- # kill 2324780 00:09:07.018 Received shutdown signal, test time was about 26.666753 seconds 00:09:07.018 00:09:07.018 Latency(us) 00:09:07.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.018 =================================================================================================================== 00:09:07.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:07.018 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@974 -- # wait 2324780 00:09:07.276 10:24:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM 
EXIT 00:09:07.276 00:09:07.276 real 0m28.190s 00:09:07.276 user 0m28.794s 00:09:07.276 sys 0m0.634s 00:09:07.276 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.276 10:24:10 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:07.276 ************************************ 00:09:07.276 END TEST bdev_qos 00:09:07.276 ************************************ 00:09:07.276 10:24:10 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:09:07.276 10:24:10 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:07.276 10:24:10 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.276 10:24:10 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:07.276 ************************************ 00:09:07.276 START TEST bdev_qd_sampling 00:09:07.276 ************************************ 00:09:07.276 10:24:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # qd_sampling_test_suite '' 00:09:07.276 10:24:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:09:07.276 10:24:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=2328819 00:09:07.276 10:24:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:09:07.276 10:24:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 2328819' 00:09:07.276 Process bdev QD sampling period testing pid: 2328819 00:09:07.276 10:24:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:09:07.276 10:24:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 2328819 00:09:07.276 10:24:10 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # '[' -z 2328819 ']' 00:09:07.276 10:24:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.276 10:24:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.276 10:24:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.276 10:24:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.276 10:24:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:07.276 [2024-07-25 10:24:10.863765] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:09:07.276 [2024-07-25 10:24:10.863851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2328819 ] 00:09:07.276 [2024-07-25 10:24:10.954194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:07.533 [2024-07-25 10:24:11.079289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.533 [2024-07-25 10:24:11.079294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.533 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.533 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # return 0 00:09:07.533 10:24:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:09:07.533 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:07.533 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:07.791 Malloc_QD 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_QD 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # local i 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:07.791 [ 00:09:07.791 { 00:09:07.791 "name": "Malloc_QD", 00:09:07.791 "aliases": [ 00:09:07.791 "67d9ea16-bbc8-4d7e-8621-f96b380b7dbe" 00:09:07.791 ], 00:09:07.791 "product_name": "Malloc disk", 00:09:07.791 "block_size": 512, 00:09:07.791 "num_blocks": 262144, 00:09:07.791 "uuid": 
"67d9ea16-bbc8-4d7e-8621-f96b380b7dbe", 00:09:07.791 "assigned_rate_limits": { 00:09:07.791 "rw_ios_per_sec": 0, 00:09:07.791 "rw_mbytes_per_sec": 0, 00:09:07.791 "r_mbytes_per_sec": 0, 00:09:07.791 "w_mbytes_per_sec": 0 00:09:07.791 }, 00:09:07.791 "claimed": false, 00:09:07.791 "zoned": false, 00:09:07.791 "supported_io_types": { 00:09:07.791 "read": true, 00:09:07.791 "write": true, 00:09:07.791 "unmap": true, 00:09:07.791 "flush": true, 00:09:07.791 "reset": true, 00:09:07.791 "nvme_admin": false, 00:09:07.791 "nvme_io": false, 00:09:07.791 "nvme_io_md": false, 00:09:07.791 "write_zeroes": true, 00:09:07.791 "zcopy": true, 00:09:07.791 "get_zone_info": false, 00:09:07.791 "zone_management": false, 00:09:07.791 "zone_append": false, 00:09:07.791 "compare": false, 00:09:07.791 "compare_and_write": false, 00:09:07.791 "abort": true, 00:09:07.791 "seek_hole": false, 00:09:07.791 "seek_data": false, 00:09:07.791 "copy": true, 00:09:07.791 "nvme_iov_md": false 00:09:07.791 }, 00:09:07.791 "memory_domains": [ 00:09:07.791 { 00:09:07.791 "dma_device_id": "system", 00:09:07.791 "dma_device_type": 1 00:09:07.791 }, 00:09:07.791 { 00:09:07.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.791 "dma_device_type": 2 00:09:07.791 } 00:09:07.791 ], 00:09:07.791 "driver_specific": {} 00:09:07.791 } 00:09:07.791 ] 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@907 -- # return 0 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:09:07.791 10:24:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:07.791 Running I/O for 5 seconds... 
00:09:09.687 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:09:09.688 "tick_rate": 2700000000, 00:09:09.688 "ticks": 15375024884671668, 00:09:09.688 "bdevs": [ 00:09:09.688 { 00:09:09.688 "name": "Malloc_QD", 00:09:09.688 "bytes_read": 947958272, 00:09:09.688 "num_read_ops": 231428, 00:09:09.688 "bytes_written": 0, 00:09:09.688 "num_write_ops": 0, 00:09:09.688 "bytes_unmapped": 0, 00:09:09.688 "num_unmap_ops": 0, 00:09:09.688 "bytes_copied": 0, 00:09:09.688 "num_copy_ops": 0, 00:09:09.688 "read_latency_ticks": 2648780447638, 00:09:09.688 "max_read_latency_ticks": 13104015, 00:09:09.688 "min_read_latency_ticks": 375840, 
00:09:09.688 "write_latency_ticks": 0, 00:09:09.688 "max_write_latency_ticks": 0, 00:09:09.688 "min_write_latency_ticks": 0, 00:09:09.688 "unmap_latency_ticks": 0, 00:09:09.688 "max_unmap_latency_ticks": 0, 00:09:09.688 "min_unmap_latency_ticks": 0, 00:09:09.688 "copy_latency_ticks": 0, 00:09:09.688 "max_copy_latency_ticks": 0, 00:09:09.688 "min_copy_latency_ticks": 0, 00:09:09.688 "io_error": {}, 00:09:09.688 "queue_depth_polling_period": 10, 00:09:09.688 "queue_depth": 512, 00:09:09.688 "io_time": 40, 00:09:09.688 "weighted_io_time": 20480 00:09:09.688 } 00:09:09.688 ] 00:09:09.688 }' 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.688 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:09.688 00:09:09.688 Latency(us) 00:09:09.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.688 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:09:09.688 Malloc_QD : 1.98 60307.34 235.58 0.00 0.00 4234.51 1219.70 4636.07 00:09:09.688 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:09.688 Malloc_QD : 1.98 60538.77 236.48 0.00 0.00 4218.72 861.68 4854.52 00:09:09.688 =================================================================================================================== 00:09:09.688 Total : 120846.11 472.06 
0.00 0.00 4226.60 861.68 4854.52 00:09:09.945 0 00:09:09.945 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.945 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 2328819 00:09:09.945 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # '[' -z 2328819 ']' 00:09:09.945 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # kill -0 2328819 00:09:09.945 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # uname 00:09:09.945 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:09.945 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2328819 00:09:09.945 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:09.945 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:09.945 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2328819' 00:09:09.945 killing process with pid 2328819 00:09:09.945 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@969 -- # kill 2328819 00:09:09.945 Received shutdown signal, test time was about 2.044953 seconds 00:09:09.945 00:09:09.945 Latency(us) 00:09:09.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.945 =================================================================================================================== 00:09:09.945 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:09.945 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@974 -- # wait 2328819 00:09:10.203 10:24:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:09:10.203 00:09:10.203 real 0m2.902s 
00:09:10.203 user 0m5.471s 00:09:10.203 sys 0m0.350s 00:09:10.203 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.203 10:24:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:10.203 ************************************ 00:09:10.203 END TEST bdev_qd_sampling 00:09:10.203 ************************************ 00:09:10.203 10:24:13 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:09:10.203 10:24:13 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:10.203 10:24:13 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.203 10:24:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:10.203 ************************************ 00:09:10.203 START TEST bdev_error 00:09:10.203 ************************************ 00:09:10.203 10:24:13 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # error_test_suite '' 00:09:10.203 10:24:13 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:09:10.203 10:24:13 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:09:10.203 10:24:13 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:09:10.203 10:24:13 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=2329128 00:09:10.203 10:24:13 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:09:10.203 10:24:13 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 2329128' 00:09:10.203 Process error testing pid: 2329128 00:09:10.203 10:24:13 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 2329128 00:09:10.203 10:24:13 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 2329128 ']' 00:09:10.203 10:24:13 
blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.203 10:24:13 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.203 10:24:13 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.203 10:24:13 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.203 10:24:13 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:10.203 [2024-07-25 10:24:13.814994] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:09:10.203 [2024-07-25 10:24:13.815064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329128 ] 00:09:10.203 [2024-07-25 10:24:13.891557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.460 [2024-07-25 10:24:14.003413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:09:11.392 10:24:14 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:11.392 Dev_1 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.392 10:24:14 
blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:11.392 [ 00:09:11.392 { 00:09:11.392 "name": "Dev_1", 00:09:11.392 "aliases": [ 00:09:11.392 "5b2301f6-23fd-4459-904f-17b8311ea0ab" 00:09:11.392 ], 00:09:11.392 "product_name": "Malloc disk", 00:09:11.392 "block_size": 512, 00:09:11.392 "num_blocks": 262144, 00:09:11.392 "uuid": "5b2301f6-23fd-4459-904f-17b8311ea0ab", 00:09:11.392 "assigned_rate_limits": { 00:09:11.392 "rw_ios_per_sec": 0, 00:09:11.392 "rw_mbytes_per_sec": 0, 00:09:11.392 "r_mbytes_per_sec": 0, 00:09:11.392 "w_mbytes_per_sec": 0 00:09:11.392 }, 00:09:11.392 "claimed": false, 00:09:11.392 "zoned": false, 00:09:11.392 "supported_io_types": { 00:09:11.392 "read": true, 00:09:11.392 
"write": true, 00:09:11.392 "unmap": true, 00:09:11.392 "flush": true, 00:09:11.392 "reset": true, 00:09:11.392 "nvme_admin": false, 00:09:11.392 "nvme_io": false, 00:09:11.392 "nvme_io_md": false, 00:09:11.392 "write_zeroes": true, 00:09:11.392 "zcopy": true, 00:09:11.392 "get_zone_info": false, 00:09:11.392 "zone_management": false, 00:09:11.392 "zone_append": false, 00:09:11.392 "compare": false, 00:09:11.392 "compare_and_write": false, 00:09:11.392 "abort": true, 00:09:11.392 "seek_hole": false, 00:09:11.392 "seek_data": false, 00:09:11.392 "copy": true, 00:09:11.392 "nvme_iov_md": false 00:09:11.392 }, 00:09:11.392 "memory_domains": [ 00:09:11.392 { 00:09:11.392 "dma_device_id": "system", 00:09:11.392 "dma_device_type": 1 00:09:11.392 }, 00:09:11.392 { 00:09:11.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.392 "dma_device_type": 2 00:09:11.392 } 00:09:11.392 ], 00:09:11.392 "driver_specific": {} 00:09:11.392 } 00:09:11.392 ] 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:09:11.392 10:24:14 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:11.392 true 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.392 10:24:14 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:11.392 Dev_2 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:11.392 10:24:14 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.392 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:11.393 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.393 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:11.393 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.393 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:11.393 [ 00:09:11.393 { 00:09:11.393 "name": "Dev_2", 00:09:11.393 "aliases": [ 00:09:11.393 "a221cffd-ead8-4d14-9cfc-f1e1b8e78c5c" 00:09:11.393 ], 00:09:11.393 "product_name": "Malloc disk", 00:09:11.393 "block_size": 512, 00:09:11.393 "num_blocks": 262144, 00:09:11.393 "uuid": "a221cffd-ead8-4d14-9cfc-f1e1b8e78c5c", 00:09:11.393 "assigned_rate_limits": { 00:09:11.393 "rw_ios_per_sec": 0, 00:09:11.393 "rw_mbytes_per_sec": 0, 00:09:11.393 "r_mbytes_per_sec": 0, 00:09:11.393 "w_mbytes_per_sec": 0 00:09:11.393 }, 00:09:11.393 "claimed": false, 00:09:11.393 "zoned": false, 00:09:11.393 "supported_io_types": { 
00:09:11.393 "read": true, 00:09:11.393 "write": true, 00:09:11.393 "unmap": true, 00:09:11.393 "flush": true, 00:09:11.393 "reset": true, 00:09:11.393 "nvme_admin": false, 00:09:11.393 "nvme_io": false, 00:09:11.393 "nvme_io_md": false, 00:09:11.393 "write_zeroes": true, 00:09:11.393 "zcopy": true, 00:09:11.393 "get_zone_info": false, 00:09:11.393 "zone_management": false, 00:09:11.393 "zone_append": false, 00:09:11.393 "compare": false, 00:09:11.393 "compare_and_write": false, 00:09:11.393 "abort": true, 00:09:11.393 "seek_hole": false, 00:09:11.393 "seek_data": false, 00:09:11.393 "copy": true, 00:09:11.393 "nvme_iov_md": false 00:09:11.393 }, 00:09:11.393 "memory_domains": [ 00:09:11.393 { 00:09:11.393 "dma_device_id": "system", 00:09:11.393 "dma_device_type": 1 00:09:11.393 }, 00:09:11.393 { 00:09:11.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.393 "dma_device_type": 2 00:09:11.393 } 00:09:11.393 ], 00:09:11.393 "driver_specific": {} 00:09:11.393 } 00:09:11.393 ] 00:09:11.393 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.393 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:09:11.393 10:24:14 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:11.393 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.393 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:11.393 10:24:14 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.393 10:24:14 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:09:11.393 10:24:14 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:11.393 Running I/O for 5 seconds... 
00:09:12.326 10:24:15 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 2329128 00:09:12.326 10:24:15 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. Pid: 2329128' 00:09:12.326 Process is existed as continue on error is set. Pid: 2329128 00:09:12.326 10:24:15 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:09:12.326 10:24:15 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.326 10:24:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:12.326 10:24:15 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.326 10:24:15 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:09:12.326 10:24:15 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.326 10:24:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:12.326 10:24:15 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.326 10:24:15 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:09:12.326 Timeout while waiting for response: 00:09:12.326 00:09:12.326 00:09:16.505 00:09:16.505 Latency(us) 00:09:16.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.505 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:16.505 EE_Dev_1 : 0.90 38748.12 151.36 5.53 0.00 409.44 144.12 722.11 00:09:16.505 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:16.505 Dev_2 : 5.00 82895.06 323.81 0.00 0.00 189.65 65.23 28544.57 00:09:16.505 =================================================================================================================== 00:09:16.505 Total : 121643.17 475.17 5.53 0.00 206.78 65.23 28544.57 00:09:17.494 10:24:20 blockdev_general.bdev_error -- 
bdev/blockdev.sh@498 -- # killprocess 2329128 00:09:17.494 10:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # '[' -z 2329128 ']' 00:09:17.494 10:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # kill -0 2329128 00:09:17.494 10:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # uname 00:09:17.494 10:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.494 10:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2329128 00:09:17.494 10:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:17.494 10:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:17.494 10:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2329128' 00:09:17.494 killing process with pid 2329128 00:09:17.494 10:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@969 -- # kill 2329128 00:09:17.494 Received shutdown signal, test time was about 5.000000 seconds 00:09:17.494 00:09:17.494 Latency(us) 00:09:17.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.494 =================================================================================================================== 00:09:17.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:17.494 10:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@974 -- # wait 2329128 00:09:17.752 10:24:21 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=2330053 00:09:17.752 10:24:21 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:09:17.752 10:24:21 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 2330053' 00:09:17.752 Process 
error testing pid: 2330053 00:09:17.752 10:24:21 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 2330053 00:09:17.752 10:24:21 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 2330053 ']' 00:09:17.752 10:24:21 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.752 10:24:21 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.752 10:24:21 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.752 10:24:21 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.752 10:24:21 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:17.752 [2024-07-25 10:24:21.379248] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:09:17.752 [2024-07-25 10:24:21.379340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330053 ] 00:09:17.752 [2024-07-25 10:24:21.460295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.010 [2024-07-25 10:24:21.580021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:09:18.945 10:24:22 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:18.945 Dev_1 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.945 10:24:22 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.945 
10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:18.945 [ 00:09:18.945 { 00:09:18.945 "name": "Dev_1", 00:09:18.945 "aliases": [ 00:09:18.945 "2783d178-b811-4c6f-abee-1715fd40a7c2" 00:09:18.945 ], 00:09:18.945 "product_name": "Malloc disk", 00:09:18.945 "block_size": 512, 00:09:18.945 "num_blocks": 262144, 00:09:18.945 "uuid": "2783d178-b811-4c6f-abee-1715fd40a7c2", 00:09:18.945 "assigned_rate_limits": { 00:09:18.945 "rw_ios_per_sec": 0, 00:09:18.945 "rw_mbytes_per_sec": 0, 00:09:18.945 "r_mbytes_per_sec": 0, 00:09:18.945 "w_mbytes_per_sec": 0 00:09:18.945 }, 00:09:18.945 "claimed": false, 00:09:18.945 "zoned": false, 00:09:18.945 "supported_io_types": { 00:09:18.945 "read": true, 00:09:18.945 "write": true, 00:09:18.945 "unmap": true, 00:09:18.945 "flush": true, 00:09:18.945 "reset": true, 00:09:18.945 "nvme_admin": false, 00:09:18.945 "nvme_io": false, 00:09:18.945 "nvme_io_md": false, 00:09:18.945 "write_zeroes": true, 00:09:18.945 "zcopy": true, 00:09:18.945 "get_zone_info": false, 00:09:18.945 "zone_management": false, 00:09:18.945 "zone_append": false, 00:09:18.945 "compare": false, 00:09:18.945 "compare_and_write": false, 00:09:18.945 "abort": true, 00:09:18.945 "seek_hole": false, 00:09:18.945 "seek_data": false, 00:09:18.945 "copy": true, 00:09:18.945 "nvme_iov_md": false 00:09:18.945 }, 00:09:18.945 "memory_domains": [ 00:09:18.945 { 00:09:18.945 "dma_device_id": "system", 00:09:18.945 "dma_device_type": 1 00:09:18.945 }, 00:09:18.945 { 00:09:18.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:18.945 "dma_device_type": 2 00:09:18.945 } 00:09:18.945 ], 00:09:18.945 "driver_specific": {} 00:09:18.945 } 00:09:18.945 ] 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:09:18.945 10:24:22 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:18.945 true 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.945 10:24:22 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:18.945 Dev_2 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.945 10:24:22 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:09:18.945 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:18.946 10:24:22 blockdev_general.bdev_error -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:18.946 [ 00:09:18.946 { 00:09:18.946 "name": "Dev_2", 00:09:18.946 "aliases": [ 00:09:18.946 "d89958c4-a0e8-485c-b5e7-d1a46556ef5c" 00:09:18.946 ], 00:09:18.946 "product_name": "Malloc disk", 00:09:18.946 "block_size": 512, 00:09:18.946 "num_blocks": 262144, 00:09:18.946 "uuid": "d89958c4-a0e8-485c-b5e7-d1a46556ef5c", 00:09:18.946 "assigned_rate_limits": { 00:09:18.946 "rw_ios_per_sec": 0, 00:09:18.946 "rw_mbytes_per_sec": 0, 00:09:18.946 "r_mbytes_per_sec": 0, 00:09:18.946 "w_mbytes_per_sec": 0 00:09:18.946 }, 00:09:18.946 "claimed": false, 00:09:18.946 "zoned": false, 00:09:18.946 "supported_io_types": { 00:09:18.946 "read": true, 00:09:18.946 "write": true, 00:09:18.946 "unmap": true, 00:09:18.946 "flush": true, 00:09:18.946 "reset": true, 00:09:18.946 "nvme_admin": false, 00:09:18.946 "nvme_io": false, 00:09:18.946 "nvme_io_md": false, 00:09:18.946 "write_zeroes": true, 00:09:18.946 "zcopy": true, 00:09:18.946 "get_zone_info": false, 00:09:18.946 "zone_management": false, 00:09:18.946 "zone_append": false, 00:09:18.946 "compare": false, 00:09:18.946 "compare_and_write": false, 00:09:18.946 "abort": true, 00:09:18.946 "seek_hole": false, 00:09:18.946 "seek_data": false, 00:09:18.946 "copy": true, 00:09:18.946 "nvme_iov_md": false 00:09:18.946 }, 00:09:18.946 "memory_domains": [ 00:09:18.946 { 00:09:18.946 "dma_device_id": "system", 00:09:18.946 "dma_device_type": 1 00:09:18.946 }, 00:09:18.946 { 
00:09:18.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.946 "dma_device_type": 2 00:09:18.946 } 00:09:18.946 ], 00:09:18.946 "driver_specific": {} 00:09:18.946 } 00:09:18.946 ] 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:09:18.946 10:24:22 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.946 10:24:22 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 2330053 00:09:18.946 10:24:22 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # local es=0 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2330053 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@638 -- # local arg=wait 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # type -t wait 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.946 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # wait 2330053 00:09:18.946 Running I/O for 5 seconds... 
00:09:18.946 task offset: 27768 on job bdev=EE_Dev_1 fails 00:09:18.946 00:09:18.946 Latency(us) 00:09:18.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.946 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:18.946 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:09:18.946 EE_Dev_1 : 0.00 29023.75 113.37 6596.31 0.00 369.61 123.64 667.50 00:09:18.946 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:18.946 Dev_2 : 0.00 19013.67 74.27 0.00 0.00 617.58 135.02 1140.81 00:09:18.946 =================================================================================================================== 00:09:18.946 Total : 48037.41 187.65 6596.31 0.00 504.10 123.64 1140.81 00:09:18.946 [2024-07-25 10:24:22.562192] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:18.946 request: 00:09:18.946 { 00:09:18.946 "method": "perform_tests", 00:09:18.946 "req_id": 1 00:09:18.946 } 00:09:18.946 Got JSON-RPC error response 00:09:18.946 response: 00:09:18.946 { 00:09:18.946 "code": -32603, 00:09:18.946 "message": "bdevperf failed with error Operation not permitted" 00:09:18.946 } 00:09:19.513 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # es=255 00:09:19.513 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:19.513 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # es=127 00:09:19.513 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # case "$es" in 00:09:19.513 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@670 -- # es=1 00:09:19.513 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:19.513 00:09:19.513 real 0m9.156s 00:09:19.513 user 0m9.517s 00:09:19.513 sys 0m0.808s 00:09:19.513 10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.513 
10:24:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:19.513 ************************************ 00:09:19.513 END TEST bdev_error 00:09:19.513 ************************************ 00:09:19.513 10:24:22 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:09:19.513 10:24:22 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:19.513 10:24:22 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.513 10:24:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:19.513 ************************************ 00:09:19.513 START TEST bdev_stat 00:09:19.513 ************************************ 00:09:19.513 10:24:22 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # stat_test_suite '' 00:09:19.513 10:24:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:09:19.513 10:24:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=2330221 00:09:19.513 10:24:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:09:19.513 10:24:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 2330221' 00:09:19.513 Process Bdev IO statistics testing pid: 2330221 00:09:19.513 10:24:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:09:19.513 10:24:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 2330221 00:09:19.513 10:24:22 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # '[' -z 2330221 ']' 00:09:19.513 10:24:22 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.513 10:24:22 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # local max_retries=100 
00:09:19.513 10:24:22 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.513 10:24:22 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.513 10:24:22 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:19.513 [2024-07-25 10:24:23.019490] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:09:19.513 [2024-07-25 10:24:23.019579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330221 ] 00:09:19.513 [2024-07-25 10:24:23.104212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:19.772 [2024-07-25 10:24:23.226259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.772 [2024-07-25 10:24:23.226264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.337 10:24:23 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.337 10:24:23 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # return 0 00:09:20.337 10:24:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:09:20.337 10:24:23 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.337 10:24:23 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:20.337 Malloc_STAT 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:09:20.337 10:24:24 
blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_STAT 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # local i 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:20.337 [ 00:09:20.337 { 00:09:20.337 "name": "Malloc_STAT", 00:09:20.337 "aliases": [ 00:09:20.337 "a225926f-db92-4012-b33c-f12c9629c262" 00:09:20.337 ], 00:09:20.337 "product_name": "Malloc disk", 00:09:20.337 "block_size": 512, 00:09:20.337 "num_blocks": 262144, 00:09:20.337 "uuid": "a225926f-db92-4012-b33c-f12c9629c262", 00:09:20.337 "assigned_rate_limits": { 00:09:20.337 "rw_ios_per_sec": 0, 00:09:20.337 "rw_mbytes_per_sec": 0, 00:09:20.337 "r_mbytes_per_sec": 0, 00:09:20.337 "w_mbytes_per_sec": 0 00:09:20.337 }, 00:09:20.337 "claimed": false, 00:09:20.337 "zoned": false, 00:09:20.337 "supported_io_types": { 00:09:20.337 "read": true, 00:09:20.337 "write": true, 00:09:20.337 "unmap": true, 00:09:20.337 "flush": true, 00:09:20.337 "reset": 
true, 00:09:20.337 "nvme_admin": false, 00:09:20.337 "nvme_io": false, 00:09:20.337 "nvme_io_md": false, 00:09:20.337 "write_zeroes": true, 00:09:20.337 "zcopy": true, 00:09:20.337 "get_zone_info": false, 00:09:20.337 "zone_management": false, 00:09:20.337 "zone_append": false, 00:09:20.337 "compare": false, 00:09:20.337 "compare_and_write": false, 00:09:20.337 "abort": true, 00:09:20.337 "seek_hole": false, 00:09:20.337 "seek_data": false, 00:09:20.337 "copy": true, 00:09:20.337 "nvme_iov_md": false 00:09:20.337 }, 00:09:20.337 "memory_domains": [ 00:09:20.337 { 00:09:20.337 "dma_device_id": "system", 00:09:20.337 "dma_device_type": 1 00:09:20.337 }, 00:09:20.337 { 00:09:20.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.337 "dma_device_type": 2 00:09:20.337 } 00:09:20.337 ], 00:09:20.337 "driver_specific": {} 00:09:20.337 } 00:09:20.337 ] 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- common/autotest_common.sh@907 -- # return 0 00:09:20.337 10:24:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:09:20.338 10:24:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:20.596 Running I/O for 10 seconds... 
00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:09:22.497 "tick_rate": 2700000000, 00:09:22.497 "ticks": 15375059163229640, 00:09:22.497 "bdevs": [ 00:09:22.497 { 00:09:22.497 "name": "Malloc_STAT", 00:09:22.497 "bytes_read": 931181056, 00:09:22.497 "num_read_ops": 227332, 00:09:22.497 "bytes_written": 0, 00:09:22.497 "num_write_ops": 0, 00:09:22.497 "bytes_unmapped": 0, 00:09:22.497 "num_unmap_ops": 0, 00:09:22.497 "bytes_copied": 0, 00:09:22.497 "num_copy_ops": 0, 00:09:22.497 "read_latency_ticks": 2615197540266, 00:09:22.497 "max_read_latency_ticks": 15520624, 00:09:22.497 "min_read_latency_ticks": 436892, 
00:09:22.497 "write_latency_ticks": 0, 00:09:22.497 "max_write_latency_ticks": 0, 00:09:22.497 "min_write_latency_ticks": 0, 00:09:22.497 "unmap_latency_ticks": 0, 00:09:22.497 "max_unmap_latency_ticks": 0, 00:09:22.497 "min_unmap_latency_ticks": 0, 00:09:22.497 "copy_latency_ticks": 0, 00:09:22.497 "max_copy_latency_ticks": 0, 00:09:22.497 "min_copy_latency_ticks": 0, 00:09:22.497 "io_error": {} 00:09:22.497 } 00:09:22.497 ] 00:09:22.497 }' 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=227332 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:09:22.497 "tick_rate": 2700000000, 00:09:22.497 "ticks": 15375059297973379, 00:09:22.497 "name": "Malloc_STAT", 00:09:22.497 "channels": [ 00:09:22.497 { 00:09:22.497 "thread_id": 2, 00:09:22.497 "bytes_read": 476053504, 00:09:22.497 "num_read_ops": 116224, 00:09:22.497 "bytes_written": 0, 00:09:22.497 "num_write_ops": 0, 00:09:22.497 "bytes_unmapped": 0, 00:09:22.497 "num_unmap_ops": 0, 00:09:22.497 "bytes_copied": 0, 00:09:22.497 "num_copy_ops": 0, 00:09:22.497 "read_latency_ticks": 1340666314294, 00:09:22.497 "max_read_latency_ticks": 12830679, 00:09:22.497 "min_read_latency_ticks": 9946331, 00:09:22.497 "write_latency_ticks": 0, 00:09:22.497 "max_write_latency_ticks": 0, 00:09:22.497 "min_write_latency_ticks": 0, 00:09:22.497 "unmap_latency_ticks": 0, 00:09:22.497 "max_unmap_latency_ticks": 0, 00:09:22.497 
"min_unmap_latency_ticks": 0, 00:09:22.497 "copy_latency_ticks": 0, 00:09:22.497 "max_copy_latency_ticks": 0, 00:09:22.497 "min_copy_latency_ticks": 0 00:09:22.497 }, 00:09:22.497 { 00:09:22.497 "thread_id": 3, 00:09:22.497 "bytes_read": 478150656, 00:09:22.497 "num_read_ops": 116736, 00:09:22.497 "bytes_written": 0, 00:09:22.497 "num_write_ops": 0, 00:09:22.497 "bytes_unmapped": 0, 00:09:22.497 "num_unmap_ops": 0, 00:09:22.497 "bytes_copied": 0, 00:09:22.497 "num_copy_ops": 0, 00:09:22.497 "read_latency_ticks": 1343031586197, 00:09:22.497 "max_read_latency_ticks": 15520624, 00:09:22.497 "min_read_latency_ticks": 10077981, 00:09:22.497 "write_latency_ticks": 0, 00:09:22.497 "max_write_latency_ticks": 0, 00:09:22.497 "min_write_latency_ticks": 0, 00:09:22.497 "unmap_latency_ticks": 0, 00:09:22.497 "max_unmap_latency_ticks": 0, 00:09:22.497 "min_unmap_latency_ticks": 0, 00:09:22.497 "copy_latency_ticks": 0, 00:09:22.497 "max_copy_latency_ticks": 0, 00:09:22.497 "min_copy_latency_ticks": 0 00:09:22.497 } 00:09:22.497 ] 00:09:22.497 }' 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=116224 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=116224 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=116736 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=232960 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:09:22.497 "tick_rate": 2700000000, 00:09:22.497 "ticks": 15375059540314023, 00:09:22.497 "bdevs": [ 00:09:22.497 { 00:09:22.497 "name": "Malloc_STAT", 00:09:22.497 "bytes_read": 996192768, 00:09:22.497 "num_read_ops": 243204, 00:09:22.497 "bytes_written": 0, 00:09:22.497 "num_write_ops": 0, 00:09:22.497 "bytes_unmapped": 0, 00:09:22.497 "num_unmap_ops": 0, 00:09:22.497 "bytes_copied": 0, 00:09:22.497 "num_copy_ops": 0, 00:09:22.497 "read_latency_ticks": 2807080517218, 00:09:22.497 "max_read_latency_ticks": 15520624, 00:09:22.497 "min_read_latency_ticks": 436892, 00:09:22.497 "write_latency_ticks": 0, 00:09:22.497 "max_write_latency_ticks": 0, 00:09:22.497 "min_write_latency_ticks": 0, 00:09:22.497 "unmap_latency_ticks": 0, 00:09:22.497 "max_unmap_latency_ticks": 0, 00:09:22.497 "min_unmap_latency_ticks": 0, 00:09:22.497 "copy_latency_ticks": 0, 00:09:22.497 "max_copy_latency_ticks": 0, 00:09:22.497 "min_copy_latency_ticks": 0, 00:09:22.497 "io_error": {} 00:09:22.497 } 00:09:22.497 ] 00:09:22.497 }' 00:09:22.497 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=243204 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 232960 -lt 227332 ']' 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 232960 -gt 243204 ']' 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:22.756 00:09:22.756 
Latency(us) 00:09:22.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.756 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:09:22.756 Malloc_STAT : 2.10 59606.92 232.84 0.00 0.00 4284.49 1219.70 4757.43 00:09:22.756 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:22.756 Malloc_STAT : 2.10 59803.87 233.61 0.00 0.00 4270.45 885.95 5752.60 00:09:22.756 =================================================================================================================== 00:09:22.756 Total : 119410.79 466.45 0.00 0.00 4277.45 885.95 5752.60 00:09:22.756 0 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 2330221 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # '[' -z 2330221 ']' 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # kill -0 2330221 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # uname 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2330221 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2330221' 00:09:22.756 killing process with pid 2330221 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@969 -- # kill 2330221 00:09:22.756 Received shutdown signal, test time was about 2.160525 seconds 00:09:22.756 00:09:22.756 Latency(us) 
00:09:22.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.756 =================================================================================================================== 00:09:22.756 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:22.756 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@974 -- # wait 2330221 00:09:23.015 10:24:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:09:23.015 00:09:23.015 real 0m3.605s 00:09:23.015 user 0m7.174s 00:09:23.015 sys 0m0.394s 00:09:23.015 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.015 10:24:26 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:23.015 ************************************ 00:09:23.015 END TEST bdev_stat 00:09:23.015 ************************************ 00:09:23.015 10:24:26 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:09:23.015 10:24:26 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:09:23.015 10:24:26 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:23.015 10:24:26 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:09:23.015 10:24:26 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:09:23.015 10:24:26 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:09:23.015 10:24:26 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:09:23.015 10:24:26 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:09:23.015 10:24:26 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:09:23.015 10:24:26 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:09:23.015 00:09:23.015 real 1m53.795s 00:09:23.015 user 7m0.351s 00:09:23.015 sys 0m20.114s 00:09:23.015 10:24:26 
blockdev_general -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.015 10:24:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:23.015 ************************************ 00:09:23.015 END TEST blockdev_general 00:09:23.015 ************************************ 00:09:23.015 10:24:26 -- spdk/autotest.sh@194 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:23.015 10:24:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:23.015 10:24:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.015 10:24:26 -- common/autotest_common.sh@10 -- # set +x 00:09:23.015 ************************************ 00:09:23.015 START TEST bdev_raid 00:09:23.015 ************************************ 00:09:23.015 10:24:26 bdev_raid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:23.015 * Looking for test storage... 00:09:23.015 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:09:23.015 10:24:26 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:23.015 10:24:26 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:23.015 10:24:26 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:09:23.015 10:24:26 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:09:23.015 10:24:26 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:09:23.015 10:24:26 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:09:23.015 10:24:26 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:09:23.015 10:24:26 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:09:23.015 10:24:26 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:09:23.274 10:24:26 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 
00:09:23.274 10:24:26 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:09:23.274 10:24:26 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:23.274 10:24:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:23.274 10:24:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.274 10:24:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.274 ************************************ 00:09:23.274 START TEST raid_function_test_raid0 00:09:23.274 ************************************ 00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=2330823 00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 2330823' 00:09:23.274 Process raid pid: 2330823 00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 2330823 /var/tmp/spdk-raid.sock 00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 2330823 ']' 00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 
00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:23.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.274 10:24:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:23.274 [2024-07-25 10:24:26.804646] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:09:23.274 [2024-07-25 10:24:26.804717] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.274 [2024-07-25 10:24:26.883476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.568 [2024-07-25 10:24:26.993534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.568 [2024-07-25 10:24:27.061193] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.568 [2024-07-25 10:24:27.061231] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.136 10:24:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.136 10:24:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:09:24.136 10:24:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:09:24.136 10:24:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:09:24.136 10:24:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:24.136 10:24:27 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@69 -- # cat 00:09:24.136 10:24:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:24.395 [2024-07-25 10:24:28.023700] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:24.395 [2024-07-25 10:24:28.024964] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:24.395 [2024-07-25 10:24:28.025034] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2749750 00:09:24.395 [2024-07-25 10:24:28.025050] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:24.395 [2024-07-25 10:24:28.025253] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2749690 00:09:24.395 [2024-07-25 10:24:28.025408] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2749750 00:09:24.395 [2024-07-25 10:24:28.025424] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x2749750 00:09:24.395 [2024-07-25 10:24:28.025529] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.395 Base_1 00:09:24.395 Base_2 00:09:24.395 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:24.395 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:24.395 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:09:24.653 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:09:24.653 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:09:24.653 10:24:28 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:24.653 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:24.653 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:24.653 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:24.653 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:24.653 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:24.653 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:24.653 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:24.653 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:24.653 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:24.912 [2024-07-25 10:24:28.525080] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2749690 00:09:24.912 /dev/nbd0 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:24.912 1+0 records in 00:09:24.912 1+0 records out 00:09:24.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253669 s, 16.1 MB/s 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:24.912 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_get_disks 00:09:25.170 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:25.170 { 00:09:25.170 "nbd_device": "/dev/nbd0", 00:09:25.170 "bdev_name": "raid" 00:09:25.170 } 00:09:25.170 ]' 00:09:25.170 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:25.170 { 00:09:25.170 "nbd_device": "/dev/nbd0", 00:09:25.170 "bdev_name": "raid" 00:09:25.170 } 00:09:25.170 ]' 00:09:25.170 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:25.170 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:25.170 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:25.170 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:25.170 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:25.170 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:25.170 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:09:25.170 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:09:25.170 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:25.171 
10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:09:25.171 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:25.429 4096+0 records in 00:09:25.429 4096+0 records out 00:09:25.429 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0165998 s, 126 MB/s 00:09:25.429 10:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:25.687 4096+0 records in 00:09:25.687 4096+0 records out 00:09:25.687 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.269289 s, 7.8 MB/s 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:25.687 128+0 records in 00:09:25.687 128+0 records out 00:09:25.687 65536 bytes (66 kB, 64 KiB) copied, 0.000279709 s, 234 MB/s 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:25.687 2035+0 records in 00:09:25.687 2035+0 records out 00:09:25.687 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00376428 s, 277 MB/s 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 
-- # blockdev --flushbufs /dev/nbd0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:25.687 456+0 records in 00:09:25.687 456+0 records out 00:09:25.687 233472 bytes (233 kB, 228 KiB) copied, 0.000821327 s, 284 MB/s 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:25.687 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:25.946 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:25.946 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:25.946 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:25.946 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:25.946 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:25.946 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:25.946 [2024-07-25 10:24:29.472570] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.946 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:25.946 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:25.946 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:25.946 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:25.946 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 2330823 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 2330823 ']' 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 2330823 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2330823 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2330823' 00:09:26.204 
killing process with pid 2330823 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 2330823 00:09:26.204 [2024-07-25 10:24:29.788985] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.204 10:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 2330823 00:09:26.204 [2024-07-25 10:24:29.789073] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.204 [2024-07-25 10:24:29.789159] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.204 [2024-07-25 10:24:29.789175] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2749750 name raid, state offline 00:09:26.204 [2024-07-25 10:24:29.809869] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.462 10:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:09:26.462 00:09:26.462 real 0m3.323s 00:09:26.462 user 0m4.532s 00:09:26.462 sys 0m0.984s 00:09:26.462 10:24:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.462 10:24:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:26.462 ************************************ 00:09:26.462 END TEST raid_function_test_raid0 00:09:26.462 ************************************ 00:09:26.462 10:24:30 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:09:26.462 10:24:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:26.462 10:24:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.462 10:24:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.462 ************************************ 00:09:26.462 START TEST raid_function_test_concat 00:09:26.462 ************************************ 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@1125 -- # raid_function_test concat 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=2331317 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 2331317' 00:09:26.462 Process raid pid: 2331317 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 2331317 /var/tmp/spdk-raid.sock 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 2331317 ']' 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:26.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.462 10:24:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:26.721 [2024-07-25 10:24:30.173118] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:09:26.721 [2024-07-25 10:24:30.173207] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.721 [2024-07-25 10:24:30.277028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.721 [2024-07-25 10:24:30.410917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.979 [2024-07-25 10:24:30.487792] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.979 [2024-07-25 10:24:30.487836] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.979 10:24:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.979 10:24:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:09:26.979 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:09:26.979 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:09:26.979 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:26.980 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:09:26.980 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:27.238 [2024-07-25 10:24:30.807779] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:27.238 [2024-07-25 10:24:30.808803] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:27.238 [2024-07-25 10:24:30.808853] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x18b2750 00:09:27.238 [2024-07-25 10:24:30.808866] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:27.238 [2024-07-25 10:24:30.809010] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18b2690 00:09:27.238 [2024-07-25 10:24:30.809157] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18b2750 00:09:27.238 [2024-07-25 10:24:30.809170] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x18b2750 00:09:27.238 [2024-07-25 10:24:30.809253] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.238 Base_1 00:09:27.238 Base_2 00:09:27.238 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:27.238 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:27.238 10:24:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:09:27.495 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:09:27.495 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:09:27.495 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:27.495 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:27.495 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:27.495 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:27.495 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:27.496 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:09:27.496 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:27.496 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:27.496 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:27.496 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:27.753 [2024-07-25 10:24:31.297125] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18b2690 00:09:27.753 /dev/nbd0 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.753 1+0 records in 
00:09:27.753 1+0 records out 00:09:27.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234319 s, 17.5 MB/s 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:27.753 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:28.011 { 00:09:28.011 "nbd_device": "/dev/nbd0", 00:09:28.011 "bdev_name": "raid" 00:09:28.011 } 00:09:28.011 ]' 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:28.011 { 00:09:28.011 "nbd_device": "/dev/nbd0", 00:09:28.011 "bdev_name": "raid" 00:09:28.011 } 00:09:28.011 ]' 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:28.011 10:24:31 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # 
unmap_blk_offs=('0' '1028' '321') 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:28.011 4096+0 records in 00:09:28.011 4096+0 records out 00:09:28.011 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0166578 s, 126 MB/s 00:09:28.011 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:28.269 4096+0 records in 00:09:28.269 4096+0 records out 00:09:28.269 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.231423 s, 9.1 MB/s 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:28.269 
128+0 records in 00:09:28.269 128+0 records out 00:09:28.269 65536 bytes (66 kB, 64 KiB) copied, 0.00023952 s, 274 MB/s 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:28.269 2035+0 records in 00:09:28.269 2035+0 records out 00:09:28.269 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0031053 s, 336 MB/s 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # 
unmap_len=233472 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:28.269 456+0 records in 00:09:28.269 456+0 records out 00:09:28.269 233472 bytes (233 kB, 228 KiB) copied, 0.000629024 s, 371 MB/s 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.269 10:24:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:28.528 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:09:28.528 [2024-07-25 10:24:32.194813] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.528 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:28.528 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:28.528 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.528 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.528 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:28.528 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:28.528 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.528 10:24:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:28.528 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:28.528 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- 
# true 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 2331317 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 2331317 ']' 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 2331317 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.786 10:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2331317 00:09:29.044 10:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:29.044 10:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:29.044 10:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2331317' 00:09:29.044 killing process with pid 2331317 00:09:29.044 10:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 2331317 00:09:29.044 [2024-07-25 10:24:32.514594] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.044 [2024-07-25 10:24:32.514671] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.044 [2024-07-25 10:24:32.514716] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:09:29.044 [2024-07-25 10:24:32.514730] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18b2750 name raid, state offline 00:09:29.044 10:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 2331317 00:09:29.044 [2024-07-25 10:24:32.535858] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.302 10:24:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:09:29.302 00:09:29.302 real 0m2.672s 00:09:29.302 user 0m3.745s 00:09:29.302 sys 0m0.965s 00:09:29.302 10:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.302 10:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:29.302 ************************************ 00:09:29.302 END TEST raid_function_test_concat 00:09:29.302 ************************************ 00:09:29.302 10:24:32 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:09:29.302 10:24:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:29.302 10:24:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.302 10:24:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.302 ************************************ 00:09:29.302 START TEST raid0_resize_test 00:09:29.302 ************************************ 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid0_resize_test 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local 
raid_size_mb 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=2331667 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 2331667' 00:09:29.302 Process raid pid: 2331667 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 2331667 /var/tmp/spdk-raid.sock 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 2331667 ']' 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:29.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.302 10:24:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.302 [2024-07-25 10:24:32.895237] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:09:29.302 [2024-07-25 10:24:32.895303] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.303 [2024-07-25 10:24:32.973763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.561 [2024-07-25 10:24:33.086531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.561 [2024-07-25 10:24:33.161193] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.561 [2024-07-25 10:24:33.161233] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.496 10:24:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.496 10:24:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:09:30.496 10:24:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:09:30.496 Base_1 00:09:30.496 10:24:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:09:30.754 Base_2 00:09:30.754 10:24:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:09:31.012 [2024-07-25 10:24:34.561977] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:31.012 [2024-07-25 10:24:34.563400] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:31.012 [2024-07-25 10:24:34.563472] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xe9d900 00:09:31.012 [2024-07-25 10:24:34.563485] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:31.012 [2024-07-25 10:24:34.563706] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1047bd0 00:09:31.012 [2024-07-25 10:24:34.563824] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe9d900 00:09:31.012 [2024-07-25 10:24:34.563837] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0xe9d900 00:09:31.012 [2024-07-25 10:24:34.563981] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.012 10:24:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:09:31.270 [2024-07-25 10:24:34.798596] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:31.270 [2024-07-25 10:24:34.798622] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:31.270 true 00:09:31.270 10:24:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:31.270 10:24:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:09:31.528 [2024-07-25 10:24:35.055447] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.528 10:24:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:09:31.528 10:24:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:09:31.528 10:24:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:09:31.528 10:24:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:09:31.786 
[2024-07-25 10:24:35.299906] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:31.786 [2024-07-25 10:24:35.299931] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:31.786 [2024-07-25 10:24:35.299957] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:31.786 true 00:09:31.786 10:24:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:31.786 10:24:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:09:32.044 [2024-07-25 10:24:35.552734] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 2331667 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 2331667 ']' 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 2331667 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2331667 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2331667' 00:09:32.044 killing process with pid 2331667 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 2331667 00:09:32.044 [2024-07-25 10:24:35.596819] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.044 10:24:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 2331667 00:09:32.044 [2024-07-25 10:24:35.596899] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.044 [2024-07-25 10:24:35.596958] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.044 [2024-07-25 10:24:35.596973] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe9d900 name Raid, state offline 00:09:32.044 [2024-07-25 10:24:35.598520] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.303 10:24:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:09:32.303 00:09:32.303 real 0m3.005s 00:09:32.303 user 0m4.702s 00:09:32.303 sys 0m0.542s 00:09:32.303 10:24:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.303 10:24:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.303 ************************************ 00:09:32.303 END TEST raid0_resize_test 00:09:32.303 ************************************ 00:09:32.303 10:24:35 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:09:32.303 10:24:35 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:09:32.303 10:24:35 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:32.303 10:24:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:32.303 10:24:35 bdev_raid -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:09:32.303 10:24:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.303 ************************************ 00:09:32.303 START TEST raid_state_function_test 00:09:32.303 ************************************ 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 
00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2332100 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2332100' 00:09:32.303 Process raid pid: 2332100 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2332100 /var/tmp/spdk-raid.sock 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2332100 ']' 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:32.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.303 10:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.303 [2024-07-25 10:24:35.947882] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:09:32.303 [2024-07-25 10:24:35.947951] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.562 [2024-07-25 10:24:36.026064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.562 [2024-07-25 10:24:36.137608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.562 [2024-07-25 10:24:36.210959] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.562 [2024-07-25 10:24:36.211002] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.496 10:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.496 10:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:33.496 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:33.496 [2024-07-25 10:24:37.116223] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.496 [2024-07-25 10:24:37.116260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.496 [2024-07-25 10:24:37.116280] bdev.c:8190:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.496 [2024-07-25 10:24:37.116291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.496 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:33.496 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:33.496 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:33.496 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:33.496 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:33.496 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:33.496 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:33.496 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:33.496 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:33.496 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:33.496 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:33.496 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.754 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:33.754 "name": "Existed_Raid", 00:09:33.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.754 "strip_size_kb": 64, 00:09:33.754 "state": "configuring", 00:09:33.754 "raid_level": "raid0", 
00:09:33.754 "superblock": false, 00:09:33.754 "num_base_bdevs": 2, 00:09:33.754 "num_base_bdevs_discovered": 0, 00:09:33.754 "num_base_bdevs_operational": 2, 00:09:33.754 "base_bdevs_list": [ 00:09:33.754 { 00:09:33.754 "name": "BaseBdev1", 00:09:33.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.754 "is_configured": false, 00:09:33.754 "data_offset": 0, 00:09:33.754 "data_size": 0 00:09:33.754 }, 00:09:33.754 { 00:09:33.754 "name": "BaseBdev2", 00:09:33.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.754 "is_configured": false, 00:09:33.754 "data_offset": 0, 00:09:33.754 "data_size": 0 00:09:33.754 } 00:09:33.754 ] 00:09:33.754 }' 00:09:33.754 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:33.754 10:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.320 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:34.578 [2024-07-25 10:24:38.146843] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.578 [2024-07-25 10:24:38.146876] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a04600 name Existed_Raid, state configuring 00:09:34.578 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:34.836 [2024-07-25 10:24:38.395536] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.836 [2024-07-25 10:24:38.395571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.836 [2024-07-25 10:24:38.395593] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:09:34.836 [2024-07-25 10:24:38.395613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.836 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:35.094 [2024-07-25 10:24:38.644249] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.094 BaseBdev1 00:09:35.094 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:35.094 10:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:35.095 10:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.095 10:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:35.095 10:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.095 10:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.095 10:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:35.352 10:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.610 [ 00:09:35.610 { 00:09:35.610 "name": "BaseBdev1", 00:09:35.610 "aliases": [ 00:09:35.610 "858b55dd-fe1f-4d01-a9fa-b116fa96f9e1" 00:09:35.610 ], 00:09:35.610 "product_name": "Malloc disk", 00:09:35.610 "block_size": 512, 00:09:35.610 "num_blocks": 65536, 00:09:35.610 "uuid": "858b55dd-fe1f-4d01-a9fa-b116fa96f9e1", 00:09:35.610 "assigned_rate_limits": { 00:09:35.610 "rw_ios_per_sec": 0, 
00:09:35.610 "rw_mbytes_per_sec": 0, 00:09:35.610 "r_mbytes_per_sec": 0, 00:09:35.610 "w_mbytes_per_sec": 0 00:09:35.610 }, 00:09:35.610 "claimed": true, 00:09:35.610 "claim_type": "exclusive_write", 00:09:35.610 "zoned": false, 00:09:35.610 "supported_io_types": { 00:09:35.610 "read": true, 00:09:35.610 "write": true, 00:09:35.610 "unmap": true, 00:09:35.610 "flush": true, 00:09:35.610 "reset": true, 00:09:35.610 "nvme_admin": false, 00:09:35.611 "nvme_io": false, 00:09:35.611 "nvme_io_md": false, 00:09:35.611 "write_zeroes": true, 00:09:35.611 "zcopy": true, 00:09:35.611 "get_zone_info": false, 00:09:35.611 "zone_management": false, 00:09:35.611 "zone_append": false, 00:09:35.611 "compare": false, 00:09:35.611 "compare_and_write": false, 00:09:35.611 "abort": true, 00:09:35.611 "seek_hole": false, 00:09:35.611 "seek_data": false, 00:09:35.611 "copy": true, 00:09:35.611 "nvme_iov_md": false 00:09:35.611 }, 00:09:35.611 "memory_domains": [ 00:09:35.611 { 00:09:35.611 "dma_device_id": "system", 00:09:35.611 "dma_device_type": 1 00:09:35.611 }, 00:09:35.611 { 00:09:35.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.611 "dma_device_type": 2 00:09:35.611 } 00:09:35.611 ], 00:09:35.611 "driver_specific": {} 00:09:35.611 } 00:09:35.611 ] 00:09:35.611 10:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.611 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:35.611 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:35.611 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:35.611 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:35.611 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:35.611 10:24:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:35.611 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:35.611 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:35.611 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:35.611 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:35.611 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.611 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.869 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:35.869 "name": "Existed_Raid", 00:09:35.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.869 "strip_size_kb": 64, 00:09:35.869 "state": "configuring", 00:09:35.869 "raid_level": "raid0", 00:09:35.869 "superblock": false, 00:09:35.869 "num_base_bdevs": 2, 00:09:35.869 "num_base_bdevs_discovered": 1, 00:09:35.869 "num_base_bdevs_operational": 2, 00:09:35.869 "base_bdevs_list": [ 00:09:35.869 { 00:09:35.869 "name": "BaseBdev1", 00:09:35.869 "uuid": "858b55dd-fe1f-4d01-a9fa-b116fa96f9e1", 00:09:35.869 "is_configured": true, 00:09:35.869 "data_offset": 0, 00:09:35.869 "data_size": 65536 00:09:35.869 }, 00:09:35.869 { 00:09:35.869 "name": "BaseBdev2", 00:09:35.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.869 "is_configured": false, 00:09:35.869 "data_offset": 0, 00:09:35.869 "data_size": 0 00:09:35.869 } 00:09:35.869 ] 00:09:35.869 }' 00:09:35.869 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:35.869 10:24:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.450 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:36.707 [2024-07-25 10:24:40.192370] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:36.707 [2024-07-25 10:24:40.192433] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a03e50 name Existed_Raid, state configuring 00:09:36.707 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:36.964 [2024-07-25 10:24:40.433021] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.964 [2024-07-25 10:24:40.434548] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.964 [2024-07-25 10:24:40.434583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.964 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.223 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:37.223 "name": "Existed_Raid", 00:09:37.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.223 "strip_size_kb": 64, 00:09:37.223 "state": "configuring", 00:09:37.223 "raid_level": "raid0", 00:09:37.223 "superblock": false, 00:09:37.223 "num_base_bdevs": 2, 00:09:37.223 "num_base_bdevs_discovered": 1, 00:09:37.223 "num_base_bdevs_operational": 2, 00:09:37.223 "base_bdevs_list": [ 00:09:37.223 { 00:09:37.223 "name": "BaseBdev1", 00:09:37.223 "uuid": "858b55dd-fe1f-4d01-a9fa-b116fa96f9e1", 00:09:37.223 "is_configured": true, 00:09:37.223 "data_offset": 0, 00:09:37.223 "data_size": 65536 00:09:37.223 }, 00:09:37.223 { 00:09:37.223 "name": "BaseBdev2", 00:09:37.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.223 "is_configured": false, 00:09:37.223 "data_offset": 0, 00:09:37.223 "data_size": 0 00:09:37.223 } 00:09:37.223 ] 00:09:37.223 }' 00:09:37.223 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:37.223 
10:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.789 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:37.789 [2024-07-25 10:24:41.481991] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.789 [2024-07-25 10:24:41.482052] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a04b40 00:09:37.789 [2024-07-25 10:24:41.482063] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:37.789 [2024-07-25 10:24:41.482265] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19fe950 00:09:37.789 [2024-07-25 10:24:41.482427] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a04b40 00:09:37.789 [2024-07-25 10:24:41.482443] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1a04b40 00:09:37.789 [2024-07-25 10:24:41.482663] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.789 BaseBdev2 00:09:38.046 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:38.046 10:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:38.046 10:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.047 10:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:38.047 10:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.047 10:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.047 10:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:38.047 10:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.304 [ 00:09:38.304 { 00:09:38.304 "name": "BaseBdev2", 00:09:38.304 "aliases": [ 00:09:38.304 "679e03f8-6dbc-45a9-99af-e2975f7324d7" 00:09:38.304 ], 00:09:38.304 "product_name": "Malloc disk", 00:09:38.304 "block_size": 512, 00:09:38.304 "num_blocks": 65536, 00:09:38.304 "uuid": "679e03f8-6dbc-45a9-99af-e2975f7324d7", 00:09:38.304 "assigned_rate_limits": { 00:09:38.304 "rw_ios_per_sec": 0, 00:09:38.304 "rw_mbytes_per_sec": 0, 00:09:38.304 "r_mbytes_per_sec": 0, 00:09:38.304 "w_mbytes_per_sec": 0 00:09:38.304 }, 00:09:38.304 "claimed": true, 00:09:38.304 "claim_type": "exclusive_write", 00:09:38.304 "zoned": false, 00:09:38.304 "supported_io_types": { 00:09:38.304 "read": true, 00:09:38.304 "write": true, 00:09:38.304 "unmap": true, 00:09:38.304 "flush": true, 00:09:38.304 "reset": true, 00:09:38.304 "nvme_admin": false, 00:09:38.304 "nvme_io": false, 00:09:38.304 "nvme_io_md": false, 00:09:38.304 "write_zeroes": true, 00:09:38.304 "zcopy": true, 00:09:38.304 "get_zone_info": false, 00:09:38.304 "zone_management": false, 00:09:38.304 "zone_append": false, 00:09:38.304 "compare": false, 00:09:38.304 "compare_and_write": false, 00:09:38.304 "abort": true, 00:09:38.304 "seek_hole": false, 00:09:38.304 "seek_data": false, 00:09:38.304 "copy": true, 00:09:38.304 "nvme_iov_md": false 00:09:38.304 }, 00:09:38.304 "memory_domains": [ 00:09:38.304 { 00:09:38.304 "dma_device_id": "system", 00:09:38.304 "dma_device_type": 1 00:09:38.304 }, 00:09:38.304 { 00:09:38.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.304 "dma_device_type": 2 00:09:38.304 } 00:09:38.304 ], 00:09:38.304 "driver_specific": {} 00:09:38.304 } 00:09:38.304 ] 
00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:38.304 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.305 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.563 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:38.563 "name": "Existed_Raid", 00:09:38.563 "uuid": "3c3d7608-c6c1-4ff2-a376-8b5fef7695de", 
00:09:38.563 "strip_size_kb": 64, 00:09:38.563 "state": "online", 00:09:38.563 "raid_level": "raid0", 00:09:38.563 "superblock": false, 00:09:38.563 "num_base_bdevs": 2, 00:09:38.563 "num_base_bdevs_discovered": 2, 00:09:38.563 "num_base_bdevs_operational": 2, 00:09:38.563 "base_bdevs_list": [ 00:09:38.563 { 00:09:38.563 "name": "BaseBdev1", 00:09:38.563 "uuid": "858b55dd-fe1f-4d01-a9fa-b116fa96f9e1", 00:09:38.563 "is_configured": true, 00:09:38.563 "data_offset": 0, 00:09:38.563 "data_size": 65536 00:09:38.563 }, 00:09:38.563 { 00:09:38.563 "name": "BaseBdev2", 00:09:38.563 "uuid": "679e03f8-6dbc-45a9-99af-e2975f7324d7", 00:09:38.563 "is_configured": true, 00:09:38.563 "data_offset": 0, 00:09:38.563 "data_size": 65536 00:09:38.563 } 00:09:38.563 ] 00:09:38.563 }' 00:09:38.563 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:38.563 10:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.128 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.128 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:39.128 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:39.128 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:39.128 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:39.128 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:39.128 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:39.128 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:39.386 [2024-07-25 10:24:43.006422] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.386 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:39.386 "name": "Existed_Raid", 00:09:39.386 "aliases": [ 00:09:39.386 "3c3d7608-c6c1-4ff2-a376-8b5fef7695de" 00:09:39.386 ], 00:09:39.386 "product_name": "Raid Volume", 00:09:39.386 "block_size": 512, 00:09:39.386 "num_blocks": 131072, 00:09:39.386 "uuid": "3c3d7608-c6c1-4ff2-a376-8b5fef7695de", 00:09:39.386 "assigned_rate_limits": { 00:09:39.386 "rw_ios_per_sec": 0, 00:09:39.386 "rw_mbytes_per_sec": 0, 00:09:39.386 "r_mbytes_per_sec": 0, 00:09:39.386 "w_mbytes_per_sec": 0 00:09:39.386 }, 00:09:39.386 "claimed": false, 00:09:39.386 "zoned": false, 00:09:39.386 "supported_io_types": { 00:09:39.386 "read": true, 00:09:39.386 "write": true, 00:09:39.386 "unmap": true, 00:09:39.386 "flush": true, 00:09:39.386 "reset": true, 00:09:39.386 "nvme_admin": false, 00:09:39.386 "nvme_io": false, 00:09:39.386 "nvme_io_md": false, 00:09:39.386 "write_zeroes": true, 00:09:39.386 "zcopy": false, 00:09:39.386 "get_zone_info": false, 00:09:39.386 "zone_management": false, 00:09:39.386 "zone_append": false, 00:09:39.386 "compare": false, 00:09:39.386 "compare_and_write": false, 00:09:39.386 "abort": false, 00:09:39.386 "seek_hole": false, 00:09:39.386 "seek_data": false, 00:09:39.386 "copy": false, 00:09:39.386 "nvme_iov_md": false 00:09:39.386 }, 00:09:39.386 "memory_domains": [ 00:09:39.386 { 00:09:39.386 "dma_device_id": "system", 00:09:39.386 "dma_device_type": 1 00:09:39.386 }, 00:09:39.386 { 00:09:39.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.386 "dma_device_type": 2 00:09:39.386 }, 00:09:39.386 { 00:09:39.386 "dma_device_id": "system", 00:09:39.386 "dma_device_type": 1 00:09:39.386 }, 00:09:39.386 { 00:09:39.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.386 "dma_device_type": 2 00:09:39.386 } 00:09:39.386 ], 00:09:39.386 "driver_specific": { 00:09:39.386 "raid": { 
00:09:39.386 "uuid": "3c3d7608-c6c1-4ff2-a376-8b5fef7695de", 00:09:39.386 "strip_size_kb": 64, 00:09:39.386 "state": "online", 00:09:39.386 "raid_level": "raid0", 00:09:39.386 "superblock": false, 00:09:39.386 "num_base_bdevs": 2, 00:09:39.386 "num_base_bdevs_discovered": 2, 00:09:39.386 "num_base_bdevs_operational": 2, 00:09:39.386 "base_bdevs_list": [ 00:09:39.386 { 00:09:39.386 "name": "BaseBdev1", 00:09:39.386 "uuid": "858b55dd-fe1f-4d01-a9fa-b116fa96f9e1", 00:09:39.386 "is_configured": true, 00:09:39.386 "data_offset": 0, 00:09:39.386 "data_size": 65536 00:09:39.386 }, 00:09:39.386 { 00:09:39.386 "name": "BaseBdev2", 00:09:39.386 "uuid": "679e03f8-6dbc-45a9-99af-e2975f7324d7", 00:09:39.386 "is_configured": true, 00:09:39.386 "data_offset": 0, 00:09:39.386 "data_size": 65536 00:09:39.386 } 00:09:39.386 ] 00:09:39.386 } 00:09:39.386 } 00:09:39.386 }' 00:09:39.386 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.386 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:39.386 BaseBdev2' 00:09:39.386 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:39.386 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:39.386 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:39.645 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:39.645 "name": "BaseBdev1", 00:09:39.645 "aliases": [ 00:09:39.645 "858b55dd-fe1f-4d01-a9fa-b116fa96f9e1" 00:09:39.645 ], 00:09:39.645 "product_name": "Malloc disk", 00:09:39.645 "block_size": 512, 00:09:39.645 "num_blocks": 65536, 00:09:39.645 "uuid": "858b55dd-fe1f-4d01-a9fa-b116fa96f9e1", 
00:09:39.645 "assigned_rate_limits": { 00:09:39.645 "rw_ios_per_sec": 0, 00:09:39.645 "rw_mbytes_per_sec": 0, 00:09:39.645 "r_mbytes_per_sec": 0, 00:09:39.645 "w_mbytes_per_sec": 0 00:09:39.645 }, 00:09:39.645 "claimed": true, 00:09:39.645 "claim_type": "exclusive_write", 00:09:39.645 "zoned": false, 00:09:39.645 "supported_io_types": { 00:09:39.645 "read": true, 00:09:39.645 "write": true, 00:09:39.645 "unmap": true, 00:09:39.645 "flush": true, 00:09:39.645 "reset": true, 00:09:39.645 "nvme_admin": false, 00:09:39.645 "nvme_io": false, 00:09:39.645 "nvme_io_md": false, 00:09:39.645 "write_zeroes": true, 00:09:39.645 "zcopy": true, 00:09:39.645 "get_zone_info": false, 00:09:39.645 "zone_management": false, 00:09:39.645 "zone_append": false, 00:09:39.645 "compare": false, 00:09:39.645 "compare_and_write": false, 00:09:39.645 "abort": true, 00:09:39.645 "seek_hole": false, 00:09:39.645 "seek_data": false, 00:09:39.645 "copy": true, 00:09:39.645 "nvme_iov_md": false 00:09:39.645 }, 00:09:39.645 "memory_domains": [ 00:09:39.645 { 00:09:39.645 "dma_device_id": "system", 00:09:39.645 "dma_device_type": 1 00:09:39.645 }, 00:09:39.645 { 00:09:39.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.645 "dma_device_type": 2 00:09:39.645 } 00:09:39.645 ], 00:09:39.645 "driver_specific": {} 00:09:39.645 }' 00:09:39.645 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:39.902 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:40.159 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:40.159 "name": "BaseBdev2", 00:09:40.159 "aliases": [ 00:09:40.159 "679e03f8-6dbc-45a9-99af-e2975f7324d7" 00:09:40.159 ], 00:09:40.159 "product_name": "Malloc disk", 00:09:40.159 "block_size": 512, 00:09:40.159 "num_blocks": 65536, 00:09:40.159 "uuid": "679e03f8-6dbc-45a9-99af-e2975f7324d7", 00:09:40.159 "assigned_rate_limits": { 00:09:40.159 "rw_ios_per_sec": 0, 00:09:40.159 "rw_mbytes_per_sec": 0, 00:09:40.159 "r_mbytes_per_sec": 0, 00:09:40.159 "w_mbytes_per_sec": 0 00:09:40.159 }, 00:09:40.159 "claimed": true, 00:09:40.159 "claim_type": "exclusive_write", 00:09:40.159 "zoned": false, 00:09:40.159 "supported_io_types": { 00:09:40.159 "read": true, 00:09:40.159 "write": true, 00:09:40.159 "unmap": true, 00:09:40.159 "flush": true, 00:09:40.159 "reset": true, 00:09:40.159 "nvme_admin": false, 00:09:40.159 "nvme_io": false, 00:09:40.159 "nvme_io_md": false, 00:09:40.159 "write_zeroes": true, 
00:09:40.159 "zcopy": true, 00:09:40.159 "get_zone_info": false, 00:09:40.159 "zone_management": false, 00:09:40.159 "zone_append": false, 00:09:40.159 "compare": false, 00:09:40.159 "compare_and_write": false, 00:09:40.159 "abort": true, 00:09:40.159 "seek_hole": false, 00:09:40.159 "seek_data": false, 00:09:40.159 "copy": true, 00:09:40.159 "nvme_iov_md": false 00:09:40.159 }, 00:09:40.159 "memory_domains": [ 00:09:40.159 { 00:09:40.159 "dma_device_id": "system", 00:09:40.159 "dma_device_type": 1 00:09:40.159 }, 00:09:40.159 { 00:09:40.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.159 "dma_device_type": 2 00:09:40.159 } 00:09:40.159 ], 00:09:40.159 "driver_specific": {} 00:09:40.159 }' 00:09:40.159 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:40.417 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:40.417 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:40.417 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:40.417 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:40.417 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:40.417 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:40.417 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:40.417 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:40.417 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:40.417 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:40.675 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:40.675 10:24:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:40.675 [2024-07-25 10:24:44.373889] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:40.675 [2024-07-25 10:24:44.373917] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.675 [2024-07-25 10:24:44.373966] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:40.933 10:24:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.933 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.191 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:41.191 "name": "Existed_Raid", 00:09:41.191 "uuid": "3c3d7608-c6c1-4ff2-a376-8b5fef7695de", 00:09:41.191 "strip_size_kb": 64, 00:09:41.191 "state": "offline", 00:09:41.191 "raid_level": "raid0", 00:09:41.191 "superblock": false, 00:09:41.191 "num_base_bdevs": 2, 00:09:41.191 "num_base_bdevs_discovered": 1, 00:09:41.191 "num_base_bdevs_operational": 1, 00:09:41.191 "base_bdevs_list": [ 00:09:41.191 { 00:09:41.191 "name": null, 00:09:41.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.191 "is_configured": false, 00:09:41.191 "data_offset": 0, 00:09:41.191 "data_size": 65536 00:09:41.191 }, 00:09:41.191 { 00:09:41.191 "name": "BaseBdev2", 00:09:41.191 "uuid": "679e03f8-6dbc-45a9-99af-e2975f7324d7", 00:09:41.191 "is_configured": true, 00:09:41.191 "data_offset": 0, 00:09:41.191 "data_size": 65536 00:09:41.191 } 00:09:41.191 ] 00:09:41.191 }' 00:09:41.191 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:41.191 10:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.757 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:41.757 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:41.757 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.757 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:41.757 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:41.757 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.757 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:42.015 [2024-07-25 10:24:45.675535] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.015 [2024-07-25 10:24:45.675597] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a04b40 name Existed_Raid, state offline 00:09:42.015 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:42.015 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:42.015 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.015 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:42.273 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:42.273 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:42.273 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:42.273 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2332100 00:09:42.273 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2332100 ']' 00:09:42.273 10:24:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2332100 00:09:42.273 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:42.273 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.273 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2332100 00:09:42.533 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.533 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.533 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2332100' 00:09:42.533 killing process with pid 2332100 00:09:42.533 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2332100 00:09:42.533 [2024-07-25 10:24:45.990303] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.533 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2332100 00:09:42.533 [2024-07-25 10:24:45.991438] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:42.823 00:09:42.823 real 0m10.362s 00:09:42.823 user 0m18.697s 00:09:42.823 sys 0m1.494s 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.823 ************************************ 00:09:42.823 END TEST raid_state_function_test 00:09:42.823 ************************************ 00:09:42.823 10:24:46 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:42.823 10:24:46 bdev_raid 
-- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:42.823 10:24:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.823 10:24:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.823 ************************************ 00:09:42.823 START TEST raid_state_function_test_sb 00:09:42.823 ************************************ 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2333537 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2333537' 00:09:42.823 Process raid pid: 2333537 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2333537 /var/tmp/spdk-raid.sock 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2333537 ']' 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:42.823 10:24:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:42.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.823 10:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.823 [2024-07-25 10:24:46.363962] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:09:42.823 [2024-07-25 10:24:46.364056] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.823 [2024-07-25 10:24:46.447306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.103 [2024-07-25 10:24:46.567119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.103 [2024-07-25 10:24:46.640266] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.103 [2024-07-25 10:24:46.640308] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.669 10:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.669 10:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:43.669 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:43.926 [2024-07-25 10:24:47.523971] bdev.c:8190:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.926 [2024-07-25 10:24:47.524017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.926 [2024-07-25 10:24:47.524028] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.926 [2024-07-25 10:24:47.524039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.926 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:43.926 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:43.926 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:43.926 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:43.926 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:43.926 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:43.926 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:43.926 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:43.926 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:43.926 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:43.926 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.926 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.183 
10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:44.183 "name": "Existed_Raid", 00:09:44.183 "uuid": "3259fb8b-0b8a-4531-a31b-dd7d6e6cfd27", 00:09:44.183 "strip_size_kb": 64, 00:09:44.183 "state": "configuring", 00:09:44.183 "raid_level": "raid0", 00:09:44.183 "superblock": true, 00:09:44.183 "num_base_bdevs": 2, 00:09:44.183 "num_base_bdevs_discovered": 0, 00:09:44.183 "num_base_bdevs_operational": 2, 00:09:44.183 "base_bdevs_list": [ 00:09:44.183 { 00:09:44.183 "name": "BaseBdev1", 00:09:44.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.183 "is_configured": false, 00:09:44.183 "data_offset": 0, 00:09:44.183 "data_size": 0 00:09:44.183 }, 00:09:44.183 { 00:09:44.183 "name": "BaseBdev2", 00:09:44.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.183 "is_configured": false, 00:09:44.183 "data_offset": 0, 00:09:44.183 "data_size": 0 00:09:44.183 } 00:09:44.183 ] 00:09:44.183 }' 00:09:44.183 10:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:44.183 10:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.745 10:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:45.003 [2024-07-25 10:24:48.558624] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.003 [2024-07-25 10:24:48.558659] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17ee600 name Existed_Raid, state configuring 00:09:45.003 10:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:45.260 [2024-07-25 10:24:48.791260] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev1 00:09:45.260 [2024-07-25 10:24:48.791295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.260 [2024-07-25 10:24:48.791318] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.260 [2024-07-25 10:24:48.791330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.260 10:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:45.518 [2024-07-25 10:24:49.060375] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.518 BaseBdev1 00:09:45.518 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:45.518 10:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:45.518 10:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.518 10:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:45.518 10:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.518 10:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.518 10:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:45.776 10:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:46.033 [ 00:09:46.033 { 00:09:46.033 "name": "BaseBdev1", 00:09:46.033 
"aliases": [ 00:09:46.033 "ab1d3293-318d-46f3-ab8f-9d31914e4fed" 00:09:46.033 ], 00:09:46.033 "product_name": "Malloc disk", 00:09:46.033 "block_size": 512, 00:09:46.033 "num_blocks": 65536, 00:09:46.033 "uuid": "ab1d3293-318d-46f3-ab8f-9d31914e4fed", 00:09:46.033 "assigned_rate_limits": { 00:09:46.033 "rw_ios_per_sec": 0, 00:09:46.033 "rw_mbytes_per_sec": 0, 00:09:46.033 "r_mbytes_per_sec": 0, 00:09:46.033 "w_mbytes_per_sec": 0 00:09:46.033 }, 00:09:46.033 "claimed": true, 00:09:46.033 "claim_type": "exclusive_write", 00:09:46.033 "zoned": false, 00:09:46.033 "supported_io_types": { 00:09:46.033 "read": true, 00:09:46.033 "write": true, 00:09:46.033 "unmap": true, 00:09:46.033 "flush": true, 00:09:46.033 "reset": true, 00:09:46.033 "nvme_admin": false, 00:09:46.033 "nvme_io": false, 00:09:46.033 "nvme_io_md": false, 00:09:46.033 "write_zeroes": true, 00:09:46.033 "zcopy": true, 00:09:46.033 "get_zone_info": false, 00:09:46.033 "zone_management": false, 00:09:46.033 "zone_append": false, 00:09:46.033 "compare": false, 00:09:46.033 "compare_and_write": false, 00:09:46.033 "abort": true, 00:09:46.033 "seek_hole": false, 00:09:46.033 "seek_data": false, 00:09:46.033 "copy": true, 00:09:46.033 "nvme_iov_md": false 00:09:46.033 }, 00:09:46.033 "memory_domains": [ 00:09:46.033 { 00:09:46.033 "dma_device_id": "system", 00:09:46.033 "dma_device_type": 1 00:09:46.033 }, 00:09:46.033 { 00:09:46.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.033 "dma_device_type": 2 00:09:46.033 } 00:09:46.033 ], 00:09:46.033 "driver_specific": {} 00:09:46.033 } 00:09:46.033 ] 00:09:46.033 10:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:46.033 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:46.033 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:46.033 10:24:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:46.033 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:46.033 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:46.033 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:46.033 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:46.033 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:46.033 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:46.033 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:46.033 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.033 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.291 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:46.291 "name": "Existed_Raid", 00:09:46.291 "uuid": "20cea243-8340-4f59-b8f6-a46b329002b0", 00:09:46.291 "strip_size_kb": 64, 00:09:46.291 "state": "configuring", 00:09:46.291 "raid_level": "raid0", 00:09:46.291 "superblock": true, 00:09:46.291 "num_base_bdevs": 2, 00:09:46.291 "num_base_bdevs_discovered": 1, 00:09:46.291 "num_base_bdevs_operational": 2, 00:09:46.291 "base_bdevs_list": [ 00:09:46.291 { 00:09:46.291 "name": "BaseBdev1", 00:09:46.291 "uuid": "ab1d3293-318d-46f3-ab8f-9d31914e4fed", 00:09:46.291 "is_configured": true, 00:09:46.291 "data_offset": 2048, 00:09:46.291 "data_size": 63488 00:09:46.291 }, 00:09:46.291 { 00:09:46.291 
"name": "BaseBdev2", 00:09:46.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.291 "is_configured": false, 00:09:46.291 "data_offset": 0, 00:09:46.291 "data_size": 0 00:09:46.291 } 00:09:46.291 ] 00:09:46.291 }' 00:09:46.291 10:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:46.291 10:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.855 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:47.112 [2024-07-25 10:24:50.576477] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.112 [2024-07-25 10:24:50.576531] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17ede50 name Existed_Raid, state configuring 00:09:47.112 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:47.370 [2024-07-25 10:24:50.825156] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.370 [2024-07-25 10:24:50.826680] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.370 [2024-07-25 10:24:50.826722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.370 10:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.628 10:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:47.628 "name": "Existed_Raid", 00:09:47.628 "uuid": "4ff91ea5-7283-4f9e-890a-088553fa3a84", 00:09:47.628 "strip_size_kb": 64, 00:09:47.628 "state": "configuring", 00:09:47.628 "raid_level": "raid0", 00:09:47.628 "superblock": true, 00:09:47.628 "num_base_bdevs": 2, 00:09:47.628 "num_base_bdevs_discovered": 1, 00:09:47.628 "num_base_bdevs_operational": 2, 00:09:47.628 "base_bdevs_list": [ 00:09:47.628 { 00:09:47.628 "name": "BaseBdev1", 00:09:47.628 "uuid": "ab1d3293-318d-46f3-ab8f-9d31914e4fed", 00:09:47.628 "is_configured": true, 00:09:47.628 "data_offset": 
2048, 00:09:47.628 "data_size": 63488 00:09:47.628 }, 00:09:47.628 { 00:09:47.628 "name": "BaseBdev2", 00:09:47.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.628 "is_configured": false, 00:09:47.628 "data_offset": 0, 00:09:47.628 "data_size": 0 00:09:47.628 } 00:09:47.628 ] 00:09:47.628 }' 00:09:47.628 10:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:47.628 10:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.193 10:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:48.193 [2024-07-25 10:24:51.898592] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.193 [2024-07-25 10:24:51.898815] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x17eeb40 00:09:48.193 [2024-07-25 10:24:51.898846] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:48.193 [2024-07-25 10:24:51.899012] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17ef8e0 00:09:48.193 [2024-07-25 10:24:51.899172] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17eeb40 00:09:48.193 [2024-07-25 10:24:51.899187] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17eeb40 00:09:48.193 [2024-07-25 10:24:51.899279] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.193 BaseBdev2 00:09:48.451 10:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:48.451 10:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:48.451 10:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:09:48.451 10:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:48.451 10:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.451 10:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.451 10:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:48.451 10:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:48.708 [ 00:09:48.708 { 00:09:48.708 "name": "BaseBdev2", 00:09:48.708 "aliases": [ 00:09:48.708 "0e41f533-9680-4938-b255-30e2efa92b86" 00:09:48.708 ], 00:09:48.708 "product_name": "Malloc disk", 00:09:48.708 "block_size": 512, 00:09:48.708 "num_blocks": 65536, 00:09:48.708 "uuid": "0e41f533-9680-4938-b255-30e2efa92b86", 00:09:48.708 "assigned_rate_limits": { 00:09:48.708 "rw_ios_per_sec": 0, 00:09:48.708 "rw_mbytes_per_sec": 0, 00:09:48.708 "r_mbytes_per_sec": 0, 00:09:48.708 "w_mbytes_per_sec": 0 00:09:48.708 }, 00:09:48.708 "claimed": true, 00:09:48.708 "claim_type": "exclusive_write", 00:09:48.708 "zoned": false, 00:09:48.708 "supported_io_types": { 00:09:48.708 "read": true, 00:09:48.708 "write": true, 00:09:48.708 "unmap": true, 00:09:48.708 "flush": true, 00:09:48.708 "reset": true, 00:09:48.708 "nvme_admin": false, 00:09:48.708 "nvme_io": false, 00:09:48.708 "nvme_io_md": false, 00:09:48.708 "write_zeroes": true, 00:09:48.708 "zcopy": true, 00:09:48.708 "get_zone_info": false, 00:09:48.708 "zone_management": false, 00:09:48.708 "zone_append": false, 00:09:48.708 "compare": false, 00:09:48.708 "compare_and_write": false, 00:09:48.708 "abort": true, 00:09:48.708 "seek_hole": false, 00:09:48.708 
"seek_data": false, 00:09:48.708 "copy": true, 00:09:48.708 "nvme_iov_md": false 00:09:48.708 }, 00:09:48.708 "memory_domains": [ 00:09:48.708 { 00:09:48.708 "dma_device_id": "system", 00:09:48.708 "dma_device_type": 1 00:09:48.708 }, 00:09:48.708 { 00:09:48.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.708 "dma_device_type": 2 00:09:48.708 } 00:09:48.708 ], 00:09:48.708 "driver_specific": {} 00:09:48.708 } 00:09:48.708 ] 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.708 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.966 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:48.966 "name": "Existed_Raid", 00:09:48.966 "uuid": "4ff91ea5-7283-4f9e-890a-088553fa3a84", 00:09:48.966 "strip_size_kb": 64, 00:09:48.966 "state": "online", 00:09:48.966 "raid_level": "raid0", 00:09:48.966 "superblock": true, 00:09:48.966 "num_base_bdevs": 2, 00:09:48.966 "num_base_bdevs_discovered": 2, 00:09:48.966 "num_base_bdevs_operational": 2, 00:09:48.966 "base_bdevs_list": [ 00:09:48.966 { 00:09:48.966 "name": "BaseBdev1", 00:09:48.966 "uuid": "ab1d3293-318d-46f3-ab8f-9d31914e4fed", 00:09:48.966 "is_configured": true, 00:09:48.966 "data_offset": 2048, 00:09:48.966 "data_size": 63488 00:09:48.966 }, 00:09:48.966 { 00:09:48.966 "name": "BaseBdev2", 00:09:48.966 "uuid": "0e41f533-9680-4938-b255-30e2efa92b86", 00:09:48.966 "is_configured": true, 00:09:48.966 "data_offset": 2048, 00:09:48.966 "data_size": 63488 00:09:48.966 } 00:09:48.966 ] 00:09:48.966 }' 00:09:48.966 10:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:48.966 10:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.530 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:49.530 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:49.530 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:49.530 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:49.530 10:24:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:49.530 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:49.531 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:49.531 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:49.788 [2024-07-25 10:24:53.426876] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.788 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:49.788 "name": "Existed_Raid", 00:09:49.788 "aliases": [ 00:09:49.788 "4ff91ea5-7283-4f9e-890a-088553fa3a84" 00:09:49.788 ], 00:09:49.788 "product_name": "Raid Volume", 00:09:49.788 "block_size": 512, 00:09:49.788 "num_blocks": 126976, 00:09:49.788 "uuid": "4ff91ea5-7283-4f9e-890a-088553fa3a84", 00:09:49.788 "assigned_rate_limits": { 00:09:49.788 "rw_ios_per_sec": 0, 00:09:49.788 "rw_mbytes_per_sec": 0, 00:09:49.788 "r_mbytes_per_sec": 0, 00:09:49.788 "w_mbytes_per_sec": 0 00:09:49.788 }, 00:09:49.788 "claimed": false, 00:09:49.788 "zoned": false, 00:09:49.788 "supported_io_types": { 00:09:49.788 "read": true, 00:09:49.788 "write": true, 00:09:49.788 "unmap": true, 00:09:49.788 "flush": true, 00:09:49.788 "reset": true, 00:09:49.788 "nvme_admin": false, 00:09:49.788 "nvme_io": false, 00:09:49.788 "nvme_io_md": false, 00:09:49.788 "write_zeroes": true, 00:09:49.788 "zcopy": false, 00:09:49.788 "get_zone_info": false, 00:09:49.788 "zone_management": false, 00:09:49.788 "zone_append": false, 00:09:49.788 "compare": false, 00:09:49.788 "compare_and_write": false, 00:09:49.788 "abort": false, 00:09:49.788 "seek_hole": false, 00:09:49.788 "seek_data": false, 00:09:49.788 "copy": false, 00:09:49.788 "nvme_iov_md": false 00:09:49.788 }, 00:09:49.788 
"memory_domains": [ 00:09:49.788 { 00:09:49.788 "dma_device_id": "system", 00:09:49.788 "dma_device_type": 1 00:09:49.788 }, 00:09:49.788 { 00:09:49.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.788 "dma_device_type": 2 00:09:49.788 }, 00:09:49.788 { 00:09:49.788 "dma_device_id": "system", 00:09:49.788 "dma_device_type": 1 00:09:49.788 }, 00:09:49.788 { 00:09:49.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.788 "dma_device_type": 2 00:09:49.788 } 00:09:49.788 ], 00:09:49.788 "driver_specific": { 00:09:49.788 "raid": { 00:09:49.788 "uuid": "4ff91ea5-7283-4f9e-890a-088553fa3a84", 00:09:49.788 "strip_size_kb": 64, 00:09:49.788 "state": "online", 00:09:49.788 "raid_level": "raid0", 00:09:49.788 "superblock": true, 00:09:49.788 "num_base_bdevs": 2, 00:09:49.788 "num_base_bdevs_discovered": 2, 00:09:49.788 "num_base_bdevs_operational": 2, 00:09:49.788 "base_bdevs_list": [ 00:09:49.788 { 00:09:49.788 "name": "BaseBdev1", 00:09:49.788 "uuid": "ab1d3293-318d-46f3-ab8f-9d31914e4fed", 00:09:49.788 "is_configured": true, 00:09:49.788 "data_offset": 2048, 00:09:49.788 "data_size": 63488 00:09:49.788 }, 00:09:49.788 { 00:09:49.788 "name": "BaseBdev2", 00:09:49.788 "uuid": "0e41f533-9680-4938-b255-30e2efa92b86", 00:09:49.788 "is_configured": true, 00:09:49.788 "data_offset": 2048, 00:09:49.788 "data_size": 63488 00:09:49.788 } 00:09:49.788 ] 00:09:49.788 } 00:09:49.788 } 00:09:49.788 }' 00:09:49.788 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.788 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:49.788 BaseBdev2' 00:09:49.788 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:49.788 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:49.788 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:50.045 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:50.045 "name": "BaseBdev1", 00:09:50.045 "aliases": [ 00:09:50.045 "ab1d3293-318d-46f3-ab8f-9d31914e4fed" 00:09:50.045 ], 00:09:50.045 "product_name": "Malloc disk", 00:09:50.045 "block_size": 512, 00:09:50.045 "num_blocks": 65536, 00:09:50.045 "uuid": "ab1d3293-318d-46f3-ab8f-9d31914e4fed", 00:09:50.045 "assigned_rate_limits": { 00:09:50.045 "rw_ios_per_sec": 0, 00:09:50.045 "rw_mbytes_per_sec": 0, 00:09:50.045 "r_mbytes_per_sec": 0, 00:09:50.045 "w_mbytes_per_sec": 0 00:09:50.045 }, 00:09:50.045 "claimed": true, 00:09:50.045 "claim_type": "exclusive_write", 00:09:50.045 "zoned": false, 00:09:50.045 "supported_io_types": { 00:09:50.045 "read": true, 00:09:50.045 "write": true, 00:09:50.045 "unmap": true, 00:09:50.045 "flush": true, 00:09:50.045 "reset": true, 00:09:50.045 "nvme_admin": false, 00:09:50.045 "nvme_io": false, 00:09:50.046 "nvme_io_md": false, 00:09:50.046 "write_zeroes": true, 00:09:50.046 "zcopy": true, 00:09:50.046 "get_zone_info": false, 00:09:50.046 "zone_management": false, 00:09:50.046 "zone_append": false, 00:09:50.046 "compare": false, 00:09:50.046 "compare_and_write": false, 00:09:50.046 "abort": true, 00:09:50.046 "seek_hole": false, 00:09:50.046 "seek_data": false, 00:09:50.046 "copy": true, 00:09:50.046 "nvme_iov_md": false 00:09:50.046 }, 00:09:50.046 "memory_domains": [ 00:09:50.046 { 00:09:50.046 "dma_device_id": "system", 00:09:50.046 "dma_device_type": 1 00:09:50.046 }, 00:09:50.046 { 00:09:50.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.046 "dma_device_type": 2 00:09:50.046 } 00:09:50.046 ], 00:09:50.046 "driver_specific": {} 00:09:50.046 }' 00:09:50.046 10:24:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.303 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.303 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:50.303 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.303 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.303 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:50.303 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.303 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.303 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:50.303 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:50.303 10:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:50.560 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:50.560 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:50.560 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:50.560 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:50.818 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:50.818 "name": "BaseBdev2", 00:09:50.818 "aliases": [ 00:09:50.818 "0e41f533-9680-4938-b255-30e2efa92b86" 00:09:50.818 ], 00:09:50.818 "product_name": "Malloc disk", 00:09:50.818 "block_size": 512, 00:09:50.818 
"num_blocks": 65536, 00:09:50.818 "uuid": "0e41f533-9680-4938-b255-30e2efa92b86", 00:09:50.818 "assigned_rate_limits": { 00:09:50.818 "rw_ios_per_sec": 0, 00:09:50.818 "rw_mbytes_per_sec": 0, 00:09:50.818 "r_mbytes_per_sec": 0, 00:09:50.818 "w_mbytes_per_sec": 0 00:09:50.818 }, 00:09:50.818 "claimed": true, 00:09:50.818 "claim_type": "exclusive_write", 00:09:50.818 "zoned": false, 00:09:50.818 "supported_io_types": { 00:09:50.818 "read": true, 00:09:50.818 "write": true, 00:09:50.818 "unmap": true, 00:09:50.818 "flush": true, 00:09:50.818 "reset": true, 00:09:50.818 "nvme_admin": false, 00:09:50.818 "nvme_io": false, 00:09:50.818 "nvme_io_md": false, 00:09:50.818 "write_zeroes": true, 00:09:50.818 "zcopy": true, 00:09:50.818 "get_zone_info": false, 00:09:50.818 "zone_management": false, 00:09:50.818 "zone_append": false, 00:09:50.818 "compare": false, 00:09:50.818 "compare_and_write": false, 00:09:50.818 "abort": true, 00:09:50.818 "seek_hole": false, 00:09:50.818 "seek_data": false, 00:09:50.818 "copy": true, 00:09:50.818 "nvme_iov_md": false 00:09:50.818 }, 00:09:50.818 "memory_domains": [ 00:09:50.818 { 00:09:50.818 "dma_device_id": "system", 00:09:50.818 "dma_device_type": 1 00:09:50.818 }, 00:09:50.818 { 00:09:50.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.818 "dma_device_type": 2 00:09:50.818 } 00:09:50.818 ], 00:09:50.818 "driver_specific": {} 00:09:50.818 }' 00:09:50.818 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.818 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.818 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:50.818 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.818 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.818 10:24:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:50.818 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.818 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.818 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:50.818 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:51.076 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:51.076 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:51.076 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:51.335 [2024-07-25 10:24:54.802615] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.335 [2024-07-25 10:24:54.802645] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.335 [2024-07-25 10:24:54.802688] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.335 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.593 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:51.593 "name": "Existed_Raid", 00:09:51.593 "uuid": "4ff91ea5-7283-4f9e-890a-088553fa3a84", 00:09:51.593 "strip_size_kb": 64, 00:09:51.593 "state": "offline", 00:09:51.593 "raid_level": "raid0", 00:09:51.593 "superblock": true, 00:09:51.593 "num_base_bdevs": 2, 00:09:51.593 "num_base_bdevs_discovered": 1, 00:09:51.593 "num_base_bdevs_operational": 1, 00:09:51.593 "base_bdevs_list": [ 00:09:51.593 { 00:09:51.593 "name": null, 00:09:51.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.593 "is_configured": false, 00:09:51.593 "data_offset": 2048, 00:09:51.593 
"data_size": 63488 00:09:51.593 }, 00:09:51.593 { 00:09:51.593 "name": "BaseBdev2", 00:09:51.593 "uuid": "0e41f533-9680-4938-b255-30e2efa92b86", 00:09:51.593 "is_configured": true, 00:09:51.593 "data_offset": 2048, 00:09:51.593 "data_size": 63488 00:09:51.593 } 00:09:51.593 ] 00:09:51.593 }' 00:09:51.593 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:51.593 10:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.158 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:52.158 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:52.158 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.159 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:52.416 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:52.417 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:52.417 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:52.417 [2024-07-25 10:24:56.095947] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:52.417 [2024-07-25 10:24:56.096008] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17eeb40 name Existed_Raid, state offline 00:09:52.675 10:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:52.675 10:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:52.675 10:24:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.675 10:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:52.675 10:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:52.675 10:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:52.675 10:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:52.675 10:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2333537 00:09:52.675 10:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2333537 ']' 00:09:52.675 10:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2333537 00:09:52.675 10:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:52.675 10:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.675 10:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2333537 00:09:52.933 10:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:52.933 10:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:52.933 10:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2333537' 00:09:52.933 killing process with pid 2333537 00:09:52.933 10:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2333537 00:09:52.933 [2024-07-25 10:24:56.400282] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.933 10:24:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2333537 00:09:52.933 [2024-07-25 10:24:56.401506] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.192 10:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:09:53.192 00:09:53.192 real 0m10.372s 00:09:53.192 user 0m18.708s 00:09:53.192 sys 0m1.503s 00:09:53.192 10:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.192 10:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.192 ************************************ 00:09:53.192 END TEST raid_state_function_test_sb 00:09:53.192 ************************************ 00:09:53.192 10:24:56 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:53.192 10:24:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:53.192 10:24:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.192 10:24:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.192 ************************************ 00:09:53.192 START TEST raid_superblock_test 00:09:53.192 ************************************ 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local 
base_bdevs_pt 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2334983 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2334983 /var/tmp/spdk-raid.sock 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2334983 ']' 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:53.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.192 10:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.192 [2024-07-25 10:24:56.782328] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:09:53.192 [2024-07-25 10:24:56.782395] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2334983 ] 00:09:53.192 [2024-07-25 10:24:56.865454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.451 [2024-07-25 10:24:56.988091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.451 [2024-07-25 10:24:57.067448] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.451 [2024-07-25 10:24:57.067490] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.451 10:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.451 10:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:53.451 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:09:53.451 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:53.451 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:09:53.451 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:09:53.451 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:53.451 10:24:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:53.451 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:53.451 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:53.451 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:53.710 malloc1 00:09:53.710 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:53.968 [2024-07-25 10:24:57.606571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:53.968 [2024-07-25 10:24:57.606626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.968 [2024-07-25 10:24:57.606654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22bc2b0 00:09:53.968 [2024-07-25 10:24:57.606671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.968 [2024-07-25 10:24:57.608329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.968 [2024-07-25 10:24:57.608358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:53.968 pt1 00:09:53.968 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:53.968 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:53.968 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:09:53.968 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:09:53.968 10:24:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:53.968 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:53.968 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:53.968 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:53.968 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:54.226 malloc2 00:09:54.226 10:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.484 [2024-07-25 10:24:58.119108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.484 [2024-07-25 10:24:58.119165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.484 [2024-07-25 10:24:58.119187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x246f1e0 00:09:54.484 [2024-07-25 10:24:58.119201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.484 [2024-07-25 10:24:58.120699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.484 [2024-07-25 10:24:58.120723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.484 pt2 00:09:54.484 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:54.484 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:54.484 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:09:54.742 [2024-07-25 10:24:58.367810] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:54.742 [2024-07-25 10:24:58.369109] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.742 [2024-07-25 10:24:58.369270] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2453df0 00:09:54.742 [2024-07-25 10:24:58.369285] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:54.742 [2024-07-25 10:24:58.369524] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24549b0 00:09:54.742 [2024-07-25 10:24:58.369680] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2453df0 00:09:54.742 [2024-07-25 10:24:58.369694] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2453df0 00:09:54.742 [2024-07-25 10:24:58.369828] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.742 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:54.742 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:54.742 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:54.742 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:54.742 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:54.742 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:54.742 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:54.743 10:24:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:54.743 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:54.743 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:54.743 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.743 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.000 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:55.000 "name": "raid_bdev1", 00:09:55.000 "uuid": "9bca38f2-b022-4064-a0fe-576a7f7dde96", 00:09:55.000 "strip_size_kb": 64, 00:09:55.000 "state": "online", 00:09:55.000 "raid_level": "raid0", 00:09:55.000 "superblock": true, 00:09:55.000 "num_base_bdevs": 2, 00:09:55.001 "num_base_bdevs_discovered": 2, 00:09:55.001 "num_base_bdevs_operational": 2, 00:09:55.001 "base_bdevs_list": [ 00:09:55.001 { 00:09:55.001 "name": "pt1", 00:09:55.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.001 "is_configured": true, 00:09:55.001 "data_offset": 2048, 00:09:55.001 "data_size": 63488 00:09:55.001 }, 00:09:55.001 { 00:09:55.001 "name": "pt2", 00:09:55.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.001 "is_configured": true, 00:09:55.001 "data_offset": 2048, 00:09:55.001 "data_size": 63488 00:09:55.001 } 00:09:55.001 ] 00:09:55.001 }' 00:09:55.001 10:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:55.001 10:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.566 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:09:55.566 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:55.566 10:24:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:55.566 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:55.566 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:55.566 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:55.566 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:55.566 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:55.825 [2024-07-25 10:24:59.414758] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.825 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:55.825 "name": "raid_bdev1", 00:09:55.825 "aliases": [ 00:09:55.825 "9bca38f2-b022-4064-a0fe-576a7f7dde96" 00:09:55.825 ], 00:09:55.825 "product_name": "Raid Volume", 00:09:55.825 "block_size": 512, 00:09:55.825 "num_blocks": 126976, 00:09:55.825 "uuid": "9bca38f2-b022-4064-a0fe-576a7f7dde96", 00:09:55.825 "assigned_rate_limits": { 00:09:55.825 "rw_ios_per_sec": 0, 00:09:55.825 "rw_mbytes_per_sec": 0, 00:09:55.825 "r_mbytes_per_sec": 0, 00:09:55.825 "w_mbytes_per_sec": 0 00:09:55.825 }, 00:09:55.825 "claimed": false, 00:09:55.825 "zoned": false, 00:09:55.825 "supported_io_types": { 00:09:55.825 "read": true, 00:09:55.825 "write": true, 00:09:55.825 "unmap": true, 00:09:55.825 "flush": true, 00:09:55.825 "reset": true, 00:09:55.825 "nvme_admin": false, 00:09:55.825 "nvme_io": false, 00:09:55.825 "nvme_io_md": false, 00:09:55.825 "write_zeroes": true, 00:09:55.825 "zcopy": false, 00:09:55.825 "get_zone_info": false, 00:09:55.825 "zone_management": false, 00:09:55.825 "zone_append": false, 00:09:55.825 "compare": false, 00:09:55.825 "compare_and_write": false, 00:09:55.825 
"abort": false, 00:09:55.825 "seek_hole": false, 00:09:55.825 "seek_data": false, 00:09:55.825 "copy": false, 00:09:55.825 "nvme_iov_md": false 00:09:55.825 }, 00:09:55.825 "memory_domains": [ 00:09:55.825 { 00:09:55.825 "dma_device_id": "system", 00:09:55.825 "dma_device_type": 1 00:09:55.825 }, 00:09:55.825 { 00:09:55.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.825 "dma_device_type": 2 00:09:55.825 }, 00:09:55.825 { 00:09:55.825 "dma_device_id": "system", 00:09:55.825 "dma_device_type": 1 00:09:55.825 }, 00:09:55.825 { 00:09:55.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.825 "dma_device_type": 2 00:09:55.825 } 00:09:55.825 ], 00:09:55.825 "driver_specific": { 00:09:55.825 "raid": { 00:09:55.825 "uuid": "9bca38f2-b022-4064-a0fe-576a7f7dde96", 00:09:55.825 "strip_size_kb": 64, 00:09:55.825 "state": "online", 00:09:55.825 "raid_level": "raid0", 00:09:55.825 "superblock": true, 00:09:55.825 "num_base_bdevs": 2, 00:09:55.825 "num_base_bdevs_discovered": 2, 00:09:55.825 "num_base_bdevs_operational": 2, 00:09:55.825 "base_bdevs_list": [ 00:09:55.825 { 00:09:55.825 "name": "pt1", 00:09:55.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.825 "is_configured": true, 00:09:55.825 "data_offset": 2048, 00:09:55.825 "data_size": 63488 00:09:55.825 }, 00:09:55.825 { 00:09:55.825 "name": "pt2", 00:09:55.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.825 "is_configured": true, 00:09:55.825 "data_offset": 2048, 00:09:55.825 "data_size": 63488 00:09:55.825 } 00:09:55.825 ] 00:09:55.825 } 00:09:55.825 } 00:09:55.825 }' 00:09:55.825 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.825 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:55.825 pt2' 00:09:55.825 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:55.825 10:24:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:55.825 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:56.084 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:56.084 "name": "pt1", 00:09:56.084 "aliases": [ 00:09:56.084 "00000000-0000-0000-0000-000000000001" 00:09:56.084 ], 00:09:56.084 "product_name": "passthru", 00:09:56.084 "block_size": 512, 00:09:56.084 "num_blocks": 65536, 00:09:56.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.084 "assigned_rate_limits": { 00:09:56.084 "rw_ios_per_sec": 0, 00:09:56.084 "rw_mbytes_per_sec": 0, 00:09:56.084 "r_mbytes_per_sec": 0, 00:09:56.084 "w_mbytes_per_sec": 0 00:09:56.084 }, 00:09:56.084 "claimed": true, 00:09:56.084 "claim_type": "exclusive_write", 00:09:56.084 "zoned": false, 00:09:56.084 "supported_io_types": { 00:09:56.084 "read": true, 00:09:56.084 "write": true, 00:09:56.084 "unmap": true, 00:09:56.084 "flush": true, 00:09:56.084 "reset": true, 00:09:56.084 "nvme_admin": false, 00:09:56.084 "nvme_io": false, 00:09:56.084 "nvme_io_md": false, 00:09:56.084 "write_zeroes": true, 00:09:56.084 "zcopy": true, 00:09:56.084 "get_zone_info": false, 00:09:56.084 "zone_management": false, 00:09:56.084 "zone_append": false, 00:09:56.084 "compare": false, 00:09:56.084 "compare_and_write": false, 00:09:56.084 "abort": true, 00:09:56.084 "seek_hole": false, 00:09:56.084 "seek_data": false, 00:09:56.084 "copy": true, 00:09:56.084 "nvme_iov_md": false 00:09:56.084 }, 00:09:56.084 "memory_domains": [ 00:09:56.084 { 00:09:56.084 "dma_device_id": "system", 00:09:56.084 "dma_device_type": 1 00:09:56.084 }, 00:09:56.084 { 00:09:56.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.084 "dma_device_type": 2 00:09:56.084 } 00:09:56.084 ], 00:09:56.084 "driver_specific": { 00:09:56.084 "passthru": { 00:09:56.084 
"name": "pt1", 00:09:56.084 "base_bdev_name": "malloc1" 00:09:56.084 } 00:09:56.084 } 00:09:56.084 }' 00:09:56.084 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:56.084 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:56.084 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:56.084 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:56.342 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:56.342 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:56.342 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:56.342 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:56.342 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:56.342 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:56.342 10:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:56.342 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:56.342 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:56.342 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:56.342 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:56.601 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:56.601 "name": "pt2", 00:09:56.601 "aliases": [ 00:09:56.601 "00000000-0000-0000-0000-000000000002" 00:09:56.601 ], 00:09:56.601 "product_name": "passthru", 00:09:56.601 "block_size": 512, 00:09:56.601 
"num_blocks": 65536, 00:09:56.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.601 "assigned_rate_limits": { 00:09:56.601 "rw_ios_per_sec": 0, 00:09:56.601 "rw_mbytes_per_sec": 0, 00:09:56.601 "r_mbytes_per_sec": 0, 00:09:56.601 "w_mbytes_per_sec": 0 00:09:56.601 }, 00:09:56.601 "claimed": true, 00:09:56.601 "claim_type": "exclusive_write", 00:09:56.601 "zoned": false, 00:09:56.601 "supported_io_types": { 00:09:56.601 "read": true, 00:09:56.601 "write": true, 00:09:56.601 "unmap": true, 00:09:56.601 "flush": true, 00:09:56.601 "reset": true, 00:09:56.601 "nvme_admin": false, 00:09:56.601 "nvme_io": false, 00:09:56.601 "nvme_io_md": false, 00:09:56.601 "write_zeroes": true, 00:09:56.601 "zcopy": true, 00:09:56.601 "get_zone_info": false, 00:09:56.601 "zone_management": false, 00:09:56.601 "zone_append": false, 00:09:56.601 "compare": false, 00:09:56.601 "compare_and_write": false, 00:09:56.601 "abort": true, 00:09:56.601 "seek_hole": false, 00:09:56.601 "seek_data": false, 00:09:56.601 "copy": true, 00:09:56.601 "nvme_iov_md": false 00:09:56.601 }, 00:09:56.601 "memory_domains": [ 00:09:56.601 { 00:09:56.601 "dma_device_id": "system", 00:09:56.601 "dma_device_type": 1 00:09:56.601 }, 00:09:56.601 { 00:09:56.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.601 "dma_device_type": 2 00:09:56.601 } 00:09:56.601 ], 00:09:56.601 "driver_specific": { 00:09:56.601 "passthru": { 00:09:56.601 "name": "pt2", 00:09:56.601 "base_bdev_name": "malloc2" 00:09:56.601 } 00:09:56.601 } 00:09:56.601 }' 00:09:56.601 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:56.601 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:56.859 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:56.859 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:56.859 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
jq .md_size 00:09:56.859 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:56.859 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:56.859 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:56.859 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:56.859 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:56.859 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:56.859 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:56.859 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:56.859 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:09:57.117 [2024-07-25 10:25:00.762468] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.118 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=9bca38f2-b022-4064-a0fe-576a7f7dde96 00:09:57.118 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 9bca38f2-b022-4064-a0fe-576a7f7dde96 ']' 00:09:57.118 10:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:57.375 [2024-07-25 10:25:01.046938] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.375 [2024-07-25 10:25:01.046964] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.375 [2024-07-25 10:25:01.047051] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.375 [2024-07-25 
10:25:01.047132] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.375 [2024-07-25 10:25:01.047147] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2453df0 name raid_bdev1, state offline 00:09:57.375 10:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:57.375 10:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:09:57.633 10:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:09:57.633 10:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:09:57.633 10:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:57.633 10:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:57.891 10:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:57.891 10:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:58.149 10:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:58.149 10:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:58.408 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:09:58.408 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:58.408 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:58.408 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:58.408 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:58.408 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.408 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:58.408 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.408 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:58.408 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.408 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:58.408 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:09:58.408 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:58.666 [2024-07-25 10:25:02.286225] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:58.666 [2024-07-25 
10:25:02.287576] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:58.666 [2024-07-25 10:25:02.287638] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:58.666 [2024-07-25 10:25:02.287687] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:58.666 [2024-07-25 10:25:02.287709] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.666 [2024-07-25 10:25:02.287718] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x245f7e0 name raid_bdev1, state configuring 00:09:58.666 request: 00:09:58.666 { 00:09:58.666 "name": "raid_bdev1", 00:09:58.666 "raid_level": "raid0", 00:09:58.666 "base_bdevs": [ 00:09:58.666 "malloc1", 00:09:58.666 "malloc2" 00:09:58.666 ], 00:09:58.666 "strip_size_kb": 64, 00:09:58.666 "superblock": false, 00:09:58.666 "method": "bdev_raid_create", 00:09:58.666 "req_id": 1 00:09:58.666 } 00:09:58.666 Got JSON-RPC error response 00:09:58.666 response: 00:09:58.666 { 00:09:58.666 "code": -17, 00:09:58.666 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:58.666 } 00:09:58.666 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:58.666 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:58.666 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:58.666 10:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:58.666 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.666 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:09:58.925 10:25:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:09:58.925 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:09:58.925 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:59.183 [2024-07-25 10:25:02.763411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:59.184 [2024-07-25 10:25:02.763481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.184 [2024-07-25 10:25:02.763516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24547b0 00:09:59.184 [2024-07-25 10:25:02.763532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.184 [2024-07-25 10:25:02.765240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.184 [2024-07-25 10:25:02.765264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:59.184 [2024-07-25 10:25:02.765351] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:59.184 [2024-07-25 10:25:02.765397] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:59.184 pt1 00:09:59.184 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:59.184 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:59.184 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:59.184 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:59.184 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:59.184 
10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:59.184 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:59.184 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:59.184 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:59.184 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:59.184 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:59.184 10:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.476 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:59.476 "name": "raid_bdev1", 00:09:59.476 "uuid": "9bca38f2-b022-4064-a0fe-576a7f7dde96", 00:09:59.476 "strip_size_kb": 64, 00:09:59.476 "state": "configuring", 00:09:59.476 "raid_level": "raid0", 00:09:59.476 "superblock": true, 00:09:59.476 "num_base_bdevs": 2, 00:09:59.476 "num_base_bdevs_discovered": 1, 00:09:59.476 "num_base_bdevs_operational": 2, 00:09:59.476 "base_bdevs_list": [ 00:09:59.476 { 00:09:59.476 "name": "pt1", 00:09:59.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.476 "is_configured": true, 00:09:59.476 "data_offset": 2048, 00:09:59.476 "data_size": 63488 00:09:59.476 }, 00:09:59.476 { 00:09:59.476 "name": null, 00:09:59.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.476 "is_configured": false, 00:09:59.476 "data_offset": 2048, 00:09:59.476 "data_size": 63488 00:09:59.476 } 00:09:59.476 ] 00:09:59.476 }' 00:09:59.476 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:59.476 10:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:10:00.041 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:10:00.041 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:00.041 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:00.041 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:00.299 [2024-07-25 10:25:03.810226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:00.299 [2024-07-25 10:25:03.810293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.299 [2024-07-25 10:25:03.810315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22bc4e0 00:10:00.299 [2024-07-25 10:25:03.810328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.299 [2024-07-25 10:25:03.810703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.299 [2024-07-25 10:25:03.810724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:00.299 [2024-07-25 10:25:03.810798] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:00.299 [2024-07-25 10:25:03.810822] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:00.299 [2024-07-25 10:25:03.810920] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x22bb4a0 00:10:00.299 [2024-07-25 10:25:03.810933] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:00.299 [2024-07-25 10:25:03.811070] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24584e0 00:10:00.299 [2024-07-25 10:25:03.811216] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x22bb4a0 00:10:00.299 [2024-07-25 10:25:03.811230] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22bb4a0 00:10:00.299 [2024-07-25 10:25:03.811323] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.299 pt2 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.299 10:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.558 10:25:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:00.558 "name": "raid_bdev1", 00:10:00.558 "uuid": "9bca38f2-b022-4064-a0fe-576a7f7dde96", 00:10:00.558 "strip_size_kb": 64, 00:10:00.558 "state": "online", 00:10:00.558 "raid_level": "raid0", 00:10:00.558 "superblock": true, 00:10:00.558 "num_base_bdevs": 2, 00:10:00.558 "num_base_bdevs_discovered": 2, 00:10:00.558 "num_base_bdevs_operational": 2, 00:10:00.558 "base_bdevs_list": [ 00:10:00.558 { 00:10:00.558 "name": "pt1", 00:10:00.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.558 "is_configured": true, 00:10:00.558 "data_offset": 2048, 00:10:00.558 "data_size": 63488 00:10:00.558 }, 00:10:00.558 { 00:10:00.558 "name": "pt2", 00:10:00.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.558 "is_configured": true, 00:10:00.558 "data_offset": 2048, 00:10:00.558 "data_size": 63488 00:10:00.558 } 00:10:00.558 ] 00:10:00.558 }' 00:10:00.558 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:00.558 10:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.122 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:01.122 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:01.122 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:01.122 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:01.122 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:01.122 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:01.122 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:01.122 10:25:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:01.380 [2024-07-25 10:25:04.841238] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.380 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:01.380 "name": "raid_bdev1", 00:10:01.380 "aliases": [ 00:10:01.380 "9bca38f2-b022-4064-a0fe-576a7f7dde96" 00:10:01.380 ], 00:10:01.380 "product_name": "Raid Volume", 00:10:01.380 "block_size": 512, 00:10:01.380 "num_blocks": 126976, 00:10:01.380 "uuid": "9bca38f2-b022-4064-a0fe-576a7f7dde96", 00:10:01.380 "assigned_rate_limits": { 00:10:01.380 "rw_ios_per_sec": 0, 00:10:01.380 "rw_mbytes_per_sec": 0, 00:10:01.380 "r_mbytes_per_sec": 0, 00:10:01.380 "w_mbytes_per_sec": 0 00:10:01.380 }, 00:10:01.380 "claimed": false, 00:10:01.380 "zoned": false, 00:10:01.380 "supported_io_types": { 00:10:01.380 "read": true, 00:10:01.380 "write": true, 00:10:01.380 "unmap": true, 00:10:01.380 "flush": true, 00:10:01.380 "reset": true, 00:10:01.380 "nvme_admin": false, 00:10:01.380 "nvme_io": false, 00:10:01.380 "nvme_io_md": false, 00:10:01.380 "write_zeroes": true, 00:10:01.380 "zcopy": false, 00:10:01.380 "get_zone_info": false, 00:10:01.380 "zone_management": false, 00:10:01.380 "zone_append": false, 00:10:01.380 "compare": false, 00:10:01.380 "compare_and_write": false, 00:10:01.380 "abort": false, 00:10:01.380 "seek_hole": false, 00:10:01.380 "seek_data": false, 00:10:01.380 "copy": false, 00:10:01.380 "nvme_iov_md": false 00:10:01.380 }, 00:10:01.380 "memory_domains": [ 00:10:01.380 { 00:10:01.380 "dma_device_id": "system", 00:10:01.380 "dma_device_type": 1 00:10:01.380 }, 00:10:01.380 { 00:10:01.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.380 "dma_device_type": 2 00:10:01.380 }, 00:10:01.380 { 00:10:01.380 "dma_device_id": "system", 00:10:01.380 "dma_device_type": 1 00:10:01.380 }, 00:10:01.380 { 00:10:01.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.380 
"dma_device_type": 2 00:10:01.380 } 00:10:01.380 ], 00:10:01.380 "driver_specific": { 00:10:01.380 "raid": { 00:10:01.380 "uuid": "9bca38f2-b022-4064-a0fe-576a7f7dde96", 00:10:01.380 "strip_size_kb": 64, 00:10:01.380 "state": "online", 00:10:01.380 "raid_level": "raid0", 00:10:01.380 "superblock": true, 00:10:01.380 "num_base_bdevs": 2, 00:10:01.380 "num_base_bdevs_discovered": 2, 00:10:01.380 "num_base_bdevs_operational": 2, 00:10:01.380 "base_bdevs_list": [ 00:10:01.380 { 00:10:01.380 "name": "pt1", 00:10:01.380 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.380 "is_configured": true, 00:10:01.380 "data_offset": 2048, 00:10:01.380 "data_size": 63488 00:10:01.380 }, 00:10:01.380 { 00:10:01.380 "name": "pt2", 00:10:01.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.380 "is_configured": true, 00:10:01.380 "data_offset": 2048, 00:10:01.380 "data_size": 63488 00:10:01.380 } 00:10:01.380 ] 00:10:01.380 } 00:10:01.380 } 00:10:01.380 }' 00:10:01.380 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.380 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:01.380 pt2' 00:10:01.380 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:01.380 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:01.380 10:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:01.638 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:01.638 "name": "pt1", 00:10:01.638 "aliases": [ 00:10:01.638 "00000000-0000-0000-0000-000000000001" 00:10:01.638 ], 00:10:01.638 "product_name": "passthru", 00:10:01.638 "block_size": 512, 00:10:01.638 "num_blocks": 65536, 
00:10:01.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.638 "assigned_rate_limits": { 00:10:01.638 "rw_ios_per_sec": 0, 00:10:01.638 "rw_mbytes_per_sec": 0, 00:10:01.638 "r_mbytes_per_sec": 0, 00:10:01.638 "w_mbytes_per_sec": 0 00:10:01.638 }, 00:10:01.638 "claimed": true, 00:10:01.638 "claim_type": "exclusive_write", 00:10:01.638 "zoned": false, 00:10:01.638 "supported_io_types": { 00:10:01.638 "read": true, 00:10:01.638 "write": true, 00:10:01.638 "unmap": true, 00:10:01.638 "flush": true, 00:10:01.638 "reset": true, 00:10:01.638 "nvme_admin": false, 00:10:01.638 "nvme_io": false, 00:10:01.638 "nvme_io_md": false, 00:10:01.638 "write_zeroes": true, 00:10:01.638 "zcopy": true, 00:10:01.638 "get_zone_info": false, 00:10:01.638 "zone_management": false, 00:10:01.638 "zone_append": false, 00:10:01.638 "compare": false, 00:10:01.638 "compare_and_write": false, 00:10:01.638 "abort": true, 00:10:01.638 "seek_hole": false, 00:10:01.638 "seek_data": false, 00:10:01.638 "copy": true, 00:10:01.638 "nvme_iov_md": false 00:10:01.638 }, 00:10:01.638 "memory_domains": [ 00:10:01.638 { 00:10:01.638 "dma_device_id": "system", 00:10:01.638 "dma_device_type": 1 00:10:01.638 }, 00:10:01.638 { 00:10:01.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.638 "dma_device_type": 2 00:10:01.638 } 00:10:01.638 ], 00:10:01.638 "driver_specific": { 00:10:01.638 "passthru": { 00:10:01.638 "name": "pt1", 00:10:01.638 "base_bdev_name": "malloc1" 00:10:01.638 } 00:10:01.638 } 00:10:01.638 }' 00:10:01.638 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:01.638 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:01.638 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:01.638 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:01.638 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:10:01.638 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:01.638 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:01.638 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:01.897 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:01.897 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:01.897 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:01.897 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:01.897 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:01.897 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:01.897 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:02.155 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:02.155 "name": "pt2", 00:10:02.155 "aliases": [ 00:10:02.155 "00000000-0000-0000-0000-000000000002" 00:10:02.155 ], 00:10:02.155 "product_name": "passthru", 00:10:02.155 "block_size": 512, 00:10:02.155 "num_blocks": 65536, 00:10:02.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.155 "assigned_rate_limits": { 00:10:02.155 "rw_ios_per_sec": 0, 00:10:02.155 "rw_mbytes_per_sec": 0, 00:10:02.155 "r_mbytes_per_sec": 0, 00:10:02.155 "w_mbytes_per_sec": 0 00:10:02.155 }, 00:10:02.155 "claimed": true, 00:10:02.155 "claim_type": "exclusive_write", 00:10:02.155 "zoned": false, 00:10:02.155 "supported_io_types": { 00:10:02.155 "read": true, 00:10:02.155 "write": true, 00:10:02.155 "unmap": true, 00:10:02.155 "flush": true, 00:10:02.155 "reset": true, 00:10:02.155 "nvme_admin": 
false, 00:10:02.155 "nvme_io": false, 00:10:02.155 "nvme_io_md": false, 00:10:02.155 "write_zeroes": true, 00:10:02.155 "zcopy": true, 00:10:02.155 "get_zone_info": false, 00:10:02.155 "zone_management": false, 00:10:02.155 "zone_append": false, 00:10:02.155 "compare": false, 00:10:02.155 "compare_and_write": false, 00:10:02.155 "abort": true, 00:10:02.155 "seek_hole": false, 00:10:02.155 "seek_data": false, 00:10:02.155 "copy": true, 00:10:02.155 "nvme_iov_md": false 00:10:02.155 }, 00:10:02.155 "memory_domains": [ 00:10:02.155 { 00:10:02.155 "dma_device_id": "system", 00:10:02.155 "dma_device_type": 1 00:10:02.155 }, 00:10:02.155 { 00:10:02.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.155 "dma_device_type": 2 00:10:02.155 } 00:10:02.155 ], 00:10:02.155 "driver_specific": { 00:10:02.155 "passthru": { 00:10:02.155 "name": "pt2", 00:10:02.155 "base_bdev_name": "malloc2" 00:10:02.155 } 00:10:02.155 } 00:10:02.155 }' 00:10:02.155 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:02.155 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:02.155 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:02.155 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:02.155 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:02.155 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:02.155 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:02.413 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:02.413 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:02.413 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:02.413 10:25:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:02.413 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:02.413 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:02.413 10:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:02.733 [2024-07-25 10:25:06.220939] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 9bca38f2-b022-4064-a0fe-576a7f7dde96 '!=' 9bca38f2-b022-4064-a0fe-576a7f7dde96 ']' 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2334983 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2334983 ']' 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2334983 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2334983 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 2334983' 00:10:02.733 killing process with pid 2334983 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2334983 00:10:02.733 [2024-07-25 10:25:06.267875] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.733 10:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2334983 00:10:02.733 [2024-07-25 10:25:06.267951] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.733 [2024-07-25 10:25:06.268015] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.733 [2024-07-25 10:25:06.268031] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22bb4a0 name raid_bdev1, state offline 00:10:02.733 [2024-07-25 10:25:06.287913] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.992 10:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:02.992 00:10:02.992 real 0m9.825s 00:10:02.992 user 0m18.131s 00:10:02.992 sys 0m1.426s 00:10:02.992 10:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.992 10:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.992 ************************************ 00:10:02.992 END TEST raid_superblock_test 00:10:02.992 ************************************ 00:10:02.992 10:25:06 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:02.992 10:25:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:02.992 10:25:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:02.992 10:25:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.992 ************************************ 00:10:02.992 START TEST raid_read_error_test 00:10:02.992 ************************************ 00:10:02.992 10:25:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:02.992 10:25:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.4CNv8bwc3d 00:10:02.992 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2336391 00:10:02.993 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:02.993 10:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2336391 /var/tmp/spdk-raid.sock 00:10:02.993 10:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2336391 ']' 00:10:02.993 10:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:02.993 10:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.993 10:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:02.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:02.993 10:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.993 10:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.993 [2024-07-25 10:25:06.666831] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:10:02.993 [2024-07-25 10:25:06.666923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2336391 ] 00:10:03.251 [2024-07-25 10:25:06.750506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.251 [2024-07-25 10:25:06.868565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.251 [2024-07-25 10:25:06.947672] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.251 [2024-07-25 10:25:06.947713] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.184 10:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.184 10:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:04.184 10:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:04.184 10:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:04.184 BaseBdev1_malloc 00:10:04.184 10:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:04.442 true 00:10:04.442 10:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:04.699 [2024-07-25 10:25:08.314675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:04.699 [2024-07-25 10:25:08.314732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:04.699 [2024-07-25 10:25:08.314756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2874250 00:10:04.699 [2024-07-25 10:25:08.314772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.699 [2024-07-25 10:25:08.316351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.699 [2024-07-25 10:25:08.316376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:04.699 BaseBdev1 00:10:04.700 10:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:04.700 10:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:04.957 BaseBdev2_malloc 00:10:04.957 10:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:05.215 true 00:10:05.215 10:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:05.473 [2024-07-25 10:25:09.056035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:05.473 [2024-07-25 10:25:09.056110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.473 [2024-07-25 10:25:09.056137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2863650 00:10:05.473 [2024-07-25 10:25:09.056152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.473 [2024-07-25 10:25:09.057619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.473 [2024-07-25 10:25:09.057647] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:05.473 BaseBdev2 00:10:05.473 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:05.731 [2024-07-25 10:25:09.312746] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.731 [2024-07-25 10:25:09.313933] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.731 [2024-07-25 10:25:09.314143] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x285a7d0 00:10:05.731 [2024-07-25 10:25:09.314162] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:05.731 [2024-07-25 10:25:09.314335] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26b7530 00:10:05.731 [2024-07-25 10:25:09.314515] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x285a7d0 00:10:05.731 [2024-07-25 10:25:09.314531] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x285a7d0 00:10:05.731 [2024-07-25 10:25:09.314646] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.731 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:05.731 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:05.731 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:05.731 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:05.731 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:05.731 10:25:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:05.731 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:05.731 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:05.731 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:05.731 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:05.731 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:05.731 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.989 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:05.989 "name": "raid_bdev1", 00:10:05.989 "uuid": "5a6ba1c1-b110-4175-af64-c84ba9ab3b04", 00:10:05.989 "strip_size_kb": 64, 00:10:05.989 "state": "online", 00:10:05.989 "raid_level": "raid0", 00:10:05.989 "superblock": true, 00:10:05.989 "num_base_bdevs": 2, 00:10:05.989 "num_base_bdevs_discovered": 2, 00:10:05.989 "num_base_bdevs_operational": 2, 00:10:05.989 "base_bdevs_list": [ 00:10:05.989 { 00:10:05.989 "name": "BaseBdev1", 00:10:05.989 "uuid": "55edc018-1af6-5142-a4f4-00c27f676d68", 00:10:05.989 "is_configured": true, 00:10:05.989 "data_offset": 2048, 00:10:05.989 "data_size": 63488 00:10:05.989 }, 00:10:05.989 { 00:10:05.989 "name": "BaseBdev2", 00:10:05.989 "uuid": "55e85459-567b-564e-a812-cf08a6148668", 00:10:05.989 "is_configured": true, 00:10:05.989 "data_offset": 2048, 00:10:05.989 "data_size": 63488 00:10:05.989 } 00:10:05.989 ] 00:10:05.989 }' 00:10:05.989 10:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:05.989 10:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.554 10:25:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:06.554 10:25:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:06.554 [2024-07-25 10:25:10.211529] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26b7e80 00:10:07.489 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:07.747 10:25:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.747 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.005 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:08.005 "name": "raid_bdev1", 00:10:08.005 "uuid": "5a6ba1c1-b110-4175-af64-c84ba9ab3b04", 00:10:08.005 "strip_size_kb": 64, 00:10:08.005 "state": "online", 00:10:08.005 "raid_level": "raid0", 00:10:08.005 "superblock": true, 00:10:08.005 "num_base_bdevs": 2, 00:10:08.005 "num_base_bdevs_discovered": 2, 00:10:08.005 "num_base_bdevs_operational": 2, 00:10:08.005 "base_bdevs_list": [ 00:10:08.005 { 00:10:08.005 "name": "BaseBdev1", 00:10:08.005 "uuid": "55edc018-1af6-5142-a4f4-00c27f676d68", 00:10:08.005 "is_configured": true, 00:10:08.005 "data_offset": 2048, 00:10:08.005 "data_size": 63488 00:10:08.005 }, 00:10:08.005 { 00:10:08.005 "name": "BaseBdev2", 00:10:08.005 "uuid": "55e85459-567b-564e-a812-cf08a6148668", 00:10:08.005 "is_configured": true, 00:10:08.005 "data_offset": 2048, 00:10:08.005 "data_size": 63488 00:10:08.005 } 00:10:08.005 ] 00:10:08.005 }' 00:10:08.005 10:25:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:08.005 10:25:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.571 10:25:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:08.829 [2024-07-25 10:25:12.394247] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:08.829 [2024-07-25 10:25:12.394300] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing 
from online to offline 00:10:08.829 [2024-07-25 10:25:12.397310] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.829 [2024-07-25 10:25:12.397348] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.829 [2024-07-25 10:25:12.397381] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.829 [2024-07-25 10:25:12.397395] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x285a7d0 name raid_bdev1, state offline 00:10:08.829 0 00:10:08.829 10:25:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2336391 00:10:08.829 10:25:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2336391 ']' 00:10:08.829 10:25:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2336391 00:10:08.829 10:25:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:08.829 10:25:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.829 10:25:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2336391 00:10:08.829 10:25:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.829 10:25:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:08.829 10:25:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2336391' 00:10:08.829 killing process with pid 2336391 00:10:08.829 10:25:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2336391 00:10:08.829 [2024-07-25 10:25:12.445302] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.829 10:25:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2336391 00:10:08.829 [2024-07-25 10:25:12.459276] 
bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.088 10:25:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.4CNv8bwc3d 00:10:09.088 10:25:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:09.088 10:25:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:09.088 10:25:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:10:09.088 10:25:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:09.088 10:25:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:09.088 10:25:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:09.088 10:25:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:10:09.088 00:10:09.088 real 0m6.123s 00:10:09.088 user 0m9.622s 00:10:09.088 sys 0m0.907s 00:10:09.088 10:25:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.088 10:25:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.088 ************************************ 00:10:09.088 END TEST raid_read_error_test 00:10:09.088 ************************************ 00:10:09.088 10:25:12 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:09.088 10:25:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:09.088 10:25:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.088 10:25:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.088 ************************************ 00:10:09.088 START TEST raid_write_error_test 00:10:09.088 ************************************ 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:10:09.088 10:25:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:09.088 10:25:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.XQFQh07GtT 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2337277 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2337277 /var/tmp/spdk-raid.sock 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2337277 ']' 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:09.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.088 10:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.346 [2024-07-25 10:25:12.837465] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:10:09.346 [2024-07-25 10:25:12.837533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337277 ] 00:10:09.346 [2024-07-25 10:25:12.913158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.346 [2024-07-25 10:25:13.023140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.604 [2024-07-25 10:25:13.093383] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.604 [2024-07-25 10:25:13.093425] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.174 10:25:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.174 10:25:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:10.174 10:25:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:10.174 10:25:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:10.432 BaseBdev1_malloc 00:10:10.432 10:25:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:10.690 true 00:10:10.690 10:25:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:10.948 [2024-07-25 10:25:14.527499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:10.948 [2024-07-25 10:25:14.527567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:10:10.948 [2024-07-25 10:25:14.527594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2078250 00:10:10.948 [2024-07-25 10:25:14.527608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.948 [2024-07-25 10:25:14.529351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.948 [2024-07-25 10:25:14.529375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:10.948 BaseBdev1 00:10:10.948 10:25:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:10.948 10:25:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:11.206 BaseBdev2_malloc 00:10:11.207 10:25:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:11.465 true 00:10:11.465 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:11.723 [2024-07-25 10:25:15.268302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:11.723 [2024-07-25 10:25:15.268368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.723 [2024-07-25 10:25:15.268405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2067650 00:10:11.723 [2024-07-25 10:25:15.268434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.723 [2024-07-25 10:25:15.270207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.723 [2024-07-25 10:25:15.270235] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:11.723 BaseBdev2 00:10:11.723 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:11.981 [2024-07-25 10:25:15.517009] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.981 [2024-07-25 10:25:15.518340] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.981 [2024-07-25 10:25:15.518552] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x205e7d0 00:10:11.981 [2024-07-25 10:25:15.518568] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:11.981 [2024-07-25 10:25:15.518771] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ebb530 00:10:11.981 [2024-07-25 10:25:15.518934] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x205e7d0 00:10:11.981 [2024-07-25 10:25:15.518947] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x205e7d0 00:10:11.981 [2024-07-25 10:25:15.519071] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.981 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:11.981 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:11.981 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:11.981 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:11.981 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:11.981 10:25:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:11.981 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:11.981 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:11.981 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:11.981 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:11.981 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.981 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.239 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:12.239 "name": "raid_bdev1", 00:10:12.239 "uuid": "3913c88e-69a8-4583-832e-a29ee8b997c3", 00:10:12.239 "strip_size_kb": 64, 00:10:12.239 "state": "online", 00:10:12.239 "raid_level": "raid0", 00:10:12.239 "superblock": true, 00:10:12.239 "num_base_bdevs": 2, 00:10:12.239 "num_base_bdevs_discovered": 2, 00:10:12.239 "num_base_bdevs_operational": 2, 00:10:12.239 "base_bdevs_list": [ 00:10:12.239 { 00:10:12.239 "name": "BaseBdev1", 00:10:12.239 "uuid": "cabd1897-977a-50c6-bb1b-f47e724b0a5c", 00:10:12.239 "is_configured": true, 00:10:12.239 "data_offset": 2048, 00:10:12.239 "data_size": 63488 00:10:12.239 }, 00:10:12.239 { 00:10:12.239 "name": "BaseBdev2", 00:10:12.239 "uuid": "875618d1-fbd1-58b7-9f67-3dabc44f326c", 00:10:12.239 "is_configured": true, 00:10:12.239 "data_offset": 2048, 00:10:12.239 "data_size": 63488 00:10:12.239 } 00:10:12.239 ] 00:10:12.239 }' 00:10:12.239 10:25:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:12.239 10:25:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.805 
10:25:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:12.805 10:25:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:12.805 [2024-07-25 10:25:16.431786] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ebbe80 00:10:13.739 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:13.998 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.256 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:14.256 "name": "raid_bdev1", 00:10:14.256 "uuid": "3913c88e-69a8-4583-832e-a29ee8b997c3", 00:10:14.256 "strip_size_kb": 64, 00:10:14.256 "state": "online", 00:10:14.256 "raid_level": "raid0", 00:10:14.256 "superblock": true, 00:10:14.256 "num_base_bdevs": 2, 00:10:14.256 "num_base_bdevs_discovered": 2, 00:10:14.256 "num_base_bdevs_operational": 2, 00:10:14.256 "base_bdevs_list": [ 00:10:14.256 { 00:10:14.256 "name": "BaseBdev1", 00:10:14.256 "uuid": "cabd1897-977a-50c6-bb1b-f47e724b0a5c", 00:10:14.256 "is_configured": true, 00:10:14.257 "data_offset": 2048, 00:10:14.257 "data_size": 63488 00:10:14.257 }, 00:10:14.257 { 00:10:14.257 "name": "BaseBdev2", 00:10:14.257 "uuid": "875618d1-fbd1-58b7-9f67-3dabc44f326c", 00:10:14.257 "is_configured": true, 00:10:14.257 "data_offset": 2048, 00:10:14.257 "data_size": 63488 00:10:14.257 } 00:10:14.257 ] 00:10:14.257 }' 00:10:14.257 10:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:14.257 10:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.822 10:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:15.080 [2024-07-25 10:25:18.614141] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.080 [2024-07-25 10:25:18.614196] bdev_raid.c:1870:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:10:15.080 [2024-07-25 10:25:18.617213] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.080 [2024-07-25 10:25:18.617251] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.080 [2024-07-25 10:25:18.617293] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.080 [2024-07-25 10:25:18.617308] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x205e7d0 name raid_bdev1, state offline 00:10:15.080 0 00:10:15.080 10:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2337277 00:10:15.080 10:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2337277 ']' 00:10:15.080 10:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2337277 00:10:15.080 10:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:15.080 10:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.080 10:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2337277 00:10:15.080 10:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.080 10:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.080 10:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2337277' 00:10:15.080 killing process with pid 2337277 00:10:15.080 10:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2337277 00:10:15.080 [2024-07-25 10:25:18.666365] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.080 10:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2337277 
00:10:15.080 [2024-07-25 10:25:18.681670] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.338 10:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.XQFQh07GtT 00:10:15.338 10:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:15.338 10:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:15.338 10:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:10:15.338 10:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:15.338 10:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:15.338 10:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:15.338 10:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:10:15.338 00:10:15.338 real 0m6.195s 00:10:15.338 user 0m9.802s 00:10:15.338 sys 0m0.852s 00:10:15.338 10:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.338 10:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.338 ************************************ 00:10:15.338 END TEST raid_write_error_test 00:10:15.338 ************************************ 00:10:15.338 10:25:18 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:15.338 10:25:18 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:15.338 10:25:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:15.338 10:25:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.338 10:25:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.338 ************************************ 00:10:15.338 START TEST raid_state_function_test 00:10:15.338 ************************************ 
00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:15.338 10:25:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2338052 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2338052' 00:10:15.338 Process raid pid: 2338052 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2338052 /var/tmp/spdk-raid.sock 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2338052 ']' 00:10:15.338 10:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:15.339 10:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.339 10:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:15.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:10:15.339 10:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.339 10:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.597 [2024-07-25 10:25:19.080961] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:10:15.597 [2024-07-25 10:25:19.081028] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.597 [2024-07-25 10:25:19.156143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.597 [2024-07-25 10:25:19.279070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.855 [2024-07-25 10:25:19.349134] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.855 [2024-07-25 10:25:19.349188] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.422 10:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.422 10:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:16.422 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:16.680 [2024-07-25 10:25:20.238442] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.680 [2024-07-25 10:25:20.238498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.680 [2024-07-25 10:25:20.238511] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.680 [2024-07-25 10:25:20.238525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev2 doesn't exist now 00:10:16.680 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:16.680 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:16.680 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:16.680 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:16.680 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:16.680 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:16.680 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:16.680 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:16.680 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:16.680 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:16.680 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.680 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.938 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:16.938 "name": "Existed_Raid", 00:10:16.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.938 "strip_size_kb": 64, 00:10:16.938 "state": "configuring", 00:10:16.938 "raid_level": "concat", 00:10:16.938 "superblock": false, 00:10:16.938 "num_base_bdevs": 2, 00:10:16.938 "num_base_bdevs_discovered": 0, 00:10:16.938 "num_base_bdevs_operational": 
2, 00:10:16.938 "base_bdevs_list": [ 00:10:16.938 { 00:10:16.938 "name": "BaseBdev1", 00:10:16.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.938 "is_configured": false, 00:10:16.938 "data_offset": 0, 00:10:16.938 "data_size": 0 00:10:16.938 }, 00:10:16.938 { 00:10:16.938 "name": "BaseBdev2", 00:10:16.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.938 "is_configured": false, 00:10:16.938 "data_offset": 0, 00:10:16.938 "data_size": 0 00:10:16.938 } 00:10:16.938 ] 00:10:16.938 }' 00:10:16.938 10:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:16.938 10:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.504 10:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:17.766 [2024-07-25 10:25:21.313150] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.766 [2024-07-25 10:25:21.313187] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x865600 name Existed_Raid, state configuring 00:10:17.766 10:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:18.068 [2024-07-25 10:25:21.569856] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.068 [2024-07-25 10:25:21.569902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.068 [2024-07-25 10:25:21.569914] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.068 [2024-07-25 10:25:21.569927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.068 10:25:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.326 [2024-07-25 10:25:21.826729] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.326 BaseBdev1 00:10:18.326 10:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:18.326 10:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:18.326 10:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.326 10:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:18.326 10:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.326 10:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.326 10:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:18.583 10:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.840 [ 00:10:18.840 { 00:10:18.840 "name": "BaseBdev1", 00:10:18.840 "aliases": [ 00:10:18.840 "ee84be84-ddda-45f3-955c-11e4cbdcbdb1" 00:10:18.840 ], 00:10:18.840 "product_name": "Malloc disk", 00:10:18.840 "block_size": 512, 00:10:18.840 "num_blocks": 65536, 00:10:18.840 "uuid": "ee84be84-ddda-45f3-955c-11e4cbdcbdb1", 00:10:18.840 "assigned_rate_limits": { 00:10:18.840 "rw_ios_per_sec": 0, 00:10:18.840 "rw_mbytes_per_sec": 0, 00:10:18.840 "r_mbytes_per_sec": 0, 00:10:18.840 "w_mbytes_per_sec": 0 00:10:18.840 }, 00:10:18.840 "claimed": true, 
00:10:18.840 "claim_type": "exclusive_write", 00:10:18.840 "zoned": false, 00:10:18.840 "supported_io_types": { 00:10:18.840 "read": true, 00:10:18.840 "write": true, 00:10:18.840 "unmap": true, 00:10:18.840 "flush": true, 00:10:18.840 "reset": true, 00:10:18.840 "nvme_admin": false, 00:10:18.840 "nvme_io": false, 00:10:18.840 "nvme_io_md": false, 00:10:18.840 "write_zeroes": true, 00:10:18.840 "zcopy": true, 00:10:18.840 "get_zone_info": false, 00:10:18.840 "zone_management": false, 00:10:18.840 "zone_append": false, 00:10:18.840 "compare": false, 00:10:18.840 "compare_and_write": false, 00:10:18.840 "abort": true, 00:10:18.840 "seek_hole": false, 00:10:18.840 "seek_data": false, 00:10:18.840 "copy": true, 00:10:18.840 "nvme_iov_md": false 00:10:18.840 }, 00:10:18.840 "memory_domains": [ 00:10:18.840 { 00:10:18.840 "dma_device_id": "system", 00:10:18.840 "dma_device_type": 1 00:10:18.840 }, 00:10:18.840 { 00:10:18.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.840 "dma_device_type": 2 00:10:18.840 } 00:10:18.840 ], 00:10:18.840 "driver_specific": {} 00:10:18.840 } 00:10:18.840 ] 00:10:18.840 10:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.840 10:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:18.840 10:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:18.840 10:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:18.840 10:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:18.840 10:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:18.840 10:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:18.840 10:25:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:18.840 10:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:18.840 10:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:18.840 10:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:18.840 10:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.840 10:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.097 10:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:19.097 "name": "Existed_Raid", 00:10:19.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.097 "strip_size_kb": 64, 00:10:19.097 "state": "configuring", 00:10:19.097 "raid_level": "concat", 00:10:19.097 "superblock": false, 00:10:19.097 "num_base_bdevs": 2, 00:10:19.097 "num_base_bdevs_discovered": 1, 00:10:19.097 "num_base_bdevs_operational": 2, 00:10:19.097 "base_bdevs_list": [ 00:10:19.097 { 00:10:19.097 "name": "BaseBdev1", 00:10:19.097 "uuid": "ee84be84-ddda-45f3-955c-11e4cbdcbdb1", 00:10:19.097 "is_configured": true, 00:10:19.097 "data_offset": 0, 00:10:19.097 "data_size": 65536 00:10:19.097 }, 00:10:19.097 { 00:10:19.097 "name": "BaseBdev2", 00:10:19.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.097 "is_configured": false, 00:10:19.097 "data_offset": 0, 00:10:19.097 "data_size": 0 00:10:19.097 } 00:10:19.097 ] 00:10:19.097 }' 00:10:19.097 10:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:19.097 10:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.661 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:19.661 [2024-07-25 10:25:23.366809] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.661 [2024-07-25 10:25:23.366867] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x864e50 name Existed_Raid, state configuring 00:10:19.917 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:19.917 [2024-07-25 10:25:23.607490] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.917 [2024-07-25 10:25:23.609087] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:19.917 [2024-07-25 10:25:23.609132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:20.173 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:20.173 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:20.173 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:20.173 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:20.173 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:20.173 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:20.173 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:20.173 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:20.173 10:25:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:20.173 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:20.173 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:20.173 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:20.173 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.173 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.430 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:20.430 "name": "Existed_Raid", 00:10:20.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.430 "strip_size_kb": 64, 00:10:20.430 "state": "configuring", 00:10:20.430 "raid_level": "concat", 00:10:20.430 "superblock": false, 00:10:20.430 "num_base_bdevs": 2, 00:10:20.430 "num_base_bdevs_discovered": 1, 00:10:20.430 "num_base_bdevs_operational": 2, 00:10:20.430 "base_bdevs_list": [ 00:10:20.430 { 00:10:20.430 "name": "BaseBdev1", 00:10:20.430 "uuid": "ee84be84-ddda-45f3-955c-11e4cbdcbdb1", 00:10:20.430 "is_configured": true, 00:10:20.430 "data_offset": 0, 00:10:20.430 "data_size": 65536 00:10:20.430 }, 00:10:20.430 { 00:10:20.430 "name": "BaseBdev2", 00:10:20.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.430 "is_configured": false, 00:10:20.430 "data_offset": 0, 00:10:20.430 "data_size": 0 00:10:20.430 } 00:10:20.430 ] 00:10:20.430 }' 00:10:20.430 10:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:20.430 10:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.995 10:25:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:20.995 [2024-07-25 10:25:24.679961] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.995 [2024-07-25 10:25:24.680011] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x865b40 00:10:20.995 [2024-07-25 10:25:24.680019] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:20.995 [2024-07-25 10:25:24.680194] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x85f950 00:10:20.995 [2024-07-25 10:25:24.680322] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x865b40 00:10:20.995 [2024-07-25 10:25:24.680335] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x865b40 00:10:20.995 [2024-07-25 10:25:24.680535] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.995 BaseBdev2 00:10:20.995 10:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:20.995 10:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:20.995 10:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:20.995 10:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:20.995 10:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:20.995 10:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:20.995 10:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:21.253 10:25:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:21.511 [ 00:10:21.511 { 00:10:21.511 "name": "BaseBdev2", 00:10:21.511 "aliases": [ 00:10:21.511 "776614cc-ce5b-4cc1-9802-f68a750124ba" 00:10:21.511 ], 00:10:21.511 "product_name": "Malloc disk", 00:10:21.511 "block_size": 512, 00:10:21.511 "num_blocks": 65536, 00:10:21.511 "uuid": "776614cc-ce5b-4cc1-9802-f68a750124ba", 00:10:21.511 "assigned_rate_limits": { 00:10:21.511 "rw_ios_per_sec": 0, 00:10:21.511 "rw_mbytes_per_sec": 0, 00:10:21.511 "r_mbytes_per_sec": 0, 00:10:21.511 "w_mbytes_per_sec": 0 00:10:21.511 }, 00:10:21.511 "claimed": true, 00:10:21.511 "claim_type": "exclusive_write", 00:10:21.511 "zoned": false, 00:10:21.511 "supported_io_types": { 00:10:21.511 "read": true, 00:10:21.511 "write": true, 00:10:21.511 "unmap": true, 00:10:21.511 "flush": true, 00:10:21.511 "reset": true, 00:10:21.511 "nvme_admin": false, 00:10:21.511 "nvme_io": false, 00:10:21.511 "nvme_io_md": false, 00:10:21.511 "write_zeroes": true, 00:10:21.511 "zcopy": true, 00:10:21.511 "get_zone_info": false, 00:10:21.511 "zone_management": false, 00:10:21.511 "zone_append": false, 00:10:21.511 "compare": false, 00:10:21.511 "compare_and_write": false, 00:10:21.511 "abort": true, 00:10:21.511 "seek_hole": false, 00:10:21.511 "seek_data": false, 00:10:21.511 "copy": true, 00:10:21.511 "nvme_iov_md": false 00:10:21.511 }, 00:10:21.511 "memory_domains": [ 00:10:21.511 { 00:10:21.511 "dma_device_id": "system", 00:10:21.511 "dma_device_type": 1 00:10:21.511 }, 00:10:21.511 { 00:10:21.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.511 "dma_device_type": 2 00:10:21.511 } 00:10:21.511 ], 00:10:21.511 "driver_specific": {} 00:10:21.511 } 00:10:21.511 ] 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.511 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.769 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:21.769 "name": "Existed_Raid", 00:10:21.769 "uuid": "d0fba1ed-852a-4d68-9c6c-36b6a147a2d0", 00:10:21.769 "strip_size_kb": 64, 00:10:21.769 "state": "online", 00:10:21.769 "raid_level": "concat", 00:10:21.769 "superblock": false, 00:10:21.769 
"num_base_bdevs": 2, 00:10:21.769 "num_base_bdevs_discovered": 2, 00:10:21.769 "num_base_bdevs_operational": 2, 00:10:21.769 "base_bdevs_list": [ 00:10:21.769 { 00:10:21.769 "name": "BaseBdev1", 00:10:21.769 "uuid": "ee84be84-ddda-45f3-955c-11e4cbdcbdb1", 00:10:21.769 "is_configured": true, 00:10:21.769 "data_offset": 0, 00:10:21.769 "data_size": 65536 00:10:21.769 }, 00:10:21.769 { 00:10:21.769 "name": "BaseBdev2", 00:10:21.769 "uuid": "776614cc-ce5b-4cc1-9802-f68a750124ba", 00:10:21.769 "is_configured": true, 00:10:21.769 "data_offset": 0, 00:10:21.769 "data_size": 65536 00:10:21.769 } 00:10:21.769 ] 00:10:21.769 }' 00:10:21.769 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:21.769 10:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.333 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:22.334 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:22.334 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:22.334 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:22.334 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:22.334 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:22.334 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:22.334 10:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:22.590 [2024-07-25 10:25:26.200287] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.590 10:25:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:22.590 "name": "Existed_Raid", 00:10:22.590 "aliases": [ 00:10:22.590 "d0fba1ed-852a-4d68-9c6c-36b6a147a2d0" 00:10:22.590 ], 00:10:22.590 "product_name": "Raid Volume", 00:10:22.591 "block_size": 512, 00:10:22.591 "num_blocks": 131072, 00:10:22.591 "uuid": "d0fba1ed-852a-4d68-9c6c-36b6a147a2d0", 00:10:22.591 "assigned_rate_limits": { 00:10:22.591 "rw_ios_per_sec": 0, 00:10:22.591 "rw_mbytes_per_sec": 0, 00:10:22.591 "r_mbytes_per_sec": 0, 00:10:22.591 "w_mbytes_per_sec": 0 00:10:22.591 }, 00:10:22.591 "claimed": false, 00:10:22.591 "zoned": false, 00:10:22.591 "supported_io_types": { 00:10:22.591 "read": true, 00:10:22.591 "write": true, 00:10:22.591 "unmap": true, 00:10:22.591 "flush": true, 00:10:22.591 "reset": true, 00:10:22.591 "nvme_admin": false, 00:10:22.591 "nvme_io": false, 00:10:22.591 "nvme_io_md": false, 00:10:22.591 "write_zeroes": true, 00:10:22.591 "zcopy": false, 00:10:22.591 "get_zone_info": false, 00:10:22.591 "zone_management": false, 00:10:22.591 "zone_append": false, 00:10:22.591 "compare": false, 00:10:22.591 "compare_and_write": false, 00:10:22.591 "abort": false, 00:10:22.591 "seek_hole": false, 00:10:22.591 "seek_data": false, 00:10:22.591 "copy": false, 00:10:22.591 "nvme_iov_md": false 00:10:22.591 }, 00:10:22.591 "memory_domains": [ 00:10:22.591 { 00:10:22.591 "dma_device_id": "system", 00:10:22.591 "dma_device_type": 1 00:10:22.591 }, 00:10:22.591 { 00:10:22.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.591 "dma_device_type": 2 00:10:22.591 }, 00:10:22.591 { 00:10:22.591 "dma_device_id": "system", 00:10:22.591 "dma_device_type": 1 00:10:22.591 }, 00:10:22.591 { 00:10:22.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.591 "dma_device_type": 2 00:10:22.591 } 00:10:22.591 ], 00:10:22.591 "driver_specific": { 00:10:22.591 "raid": { 00:10:22.591 "uuid": "d0fba1ed-852a-4d68-9c6c-36b6a147a2d0", 00:10:22.591 "strip_size_kb": 64, 00:10:22.591 "state": "online", 00:10:22.591 
"raid_level": "concat", 00:10:22.591 "superblock": false, 00:10:22.591 "num_base_bdevs": 2, 00:10:22.591 "num_base_bdevs_discovered": 2, 00:10:22.591 "num_base_bdevs_operational": 2, 00:10:22.591 "base_bdevs_list": [ 00:10:22.591 { 00:10:22.591 "name": "BaseBdev1", 00:10:22.591 "uuid": "ee84be84-ddda-45f3-955c-11e4cbdcbdb1", 00:10:22.591 "is_configured": true, 00:10:22.591 "data_offset": 0, 00:10:22.591 "data_size": 65536 00:10:22.591 }, 00:10:22.591 { 00:10:22.591 "name": "BaseBdev2", 00:10:22.591 "uuid": "776614cc-ce5b-4cc1-9802-f68a750124ba", 00:10:22.591 "is_configured": true, 00:10:22.591 "data_offset": 0, 00:10:22.591 "data_size": 65536 00:10:22.591 } 00:10:22.591 ] 00:10:22.591 } 00:10:22.591 } 00:10:22.591 }' 00:10:22.591 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.591 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:22.591 BaseBdev2' 00:10:22.591 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:22.591 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:22.591 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:22.848 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:22.848 "name": "BaseBdev1", 00:10:22.848 "aliases": [ 00:10:22.848 "ee84be84-ddda-45f3-955c-11e4cbdcbdb1" 00:10:22.848 ], 00:10:22.848 "product_name": "Malloc disk", 00:10:22.848 "block_size": 512, 00:10:22.848 "num_blocks": 65536, 00:10:22.848 "uuid": "ee84be84-ddda-45f3-955c-11e4cbdcbdb1", 00:10:22.848 "assigned_rate_limits": { 00:10:22.848 "rw_ios_per_sec": 0, 00:10:22.848 "rw_mbytes_per_sec": 0, 00:10:22.848 
"r_mbytes_per_sec": 0, 00:10:22.848 "w_mbytes_per_sec": 0 00:10:22.848 }, 00:10:22.848 "claimed": true, 00:10:22.848 "claim_type": "exclusive_write", 00:10:22.848 "zoned": false, 00:10:22.848 "supported_io_types": { 00:10:22.848 "read": true, 00:10:22.848 "write": true, 00:10:22.848 "unmap": true, 00:10:22.848 "flush": true, 00:10:22.848 "reset": true, 00:10:22.848 "nvme_admin": false, 00:10:22.848 "nvme_io": false, 00:10:22.848 "nvme_io_md": false, 00:10:22.848 "write_zeroes": true, 00:10:22.848 "zcopy": true, 00:10:22.848 "get_zone_info": false, 00:10:22.848 "zone_management": false, 00:10:22.848 "zone_append": false, 00:10:22.848 "compare": false, 00:10:22.848 "compare_and_write": false, 00:10:22.848 "abort": true, 00:10:22.848 "seek_hole": false, 00:10:22.848 "seek_data": false, 00:10:22.848 "copy": true, 00:10:22.848 "nvme_iov_md": false 00:10:22.848 }, 00:10:22.848 "memory_domains": [ 00:10:22.848 { 00:10:22.848 "dma_device_id": "system", 00:10:22.848 "dma_device_type": 1 00:10:22.848 }, 00:10:22.848 { 00:10:22.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.848 "dma_device_type": 2 00:10:22.848 } 00:10:22.848 ], 00:10:22.848 "driver_specific": {} 00:10:22.848 }' 00:10:22.848 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.105 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.105 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:23.105 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.105 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.105 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:23.105 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.105 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:10:23.105 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:23.105 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:23.105 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:23.363 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:23.363 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:23.363 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:23.363 10:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:23.620 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:23.620 "name": "BaseBdev2", 00:10:23.620 "aliases": [ 00:10:23.620 "776614cc-ce5b-4cc1-9802-f68a750124ba" 00:10:23.620 ], 00:10:23.620 "product_name": "Malloc disk", 00:10:23.620 "block_size": 512, 00:10:23.620 "num_blocks": 65536, 00:10:23.620 "uuid": "776614cc-ce5b-4cc1-9802-f68a750124ba", 00:10:23.620 "assigned_rate_limits": { 00:10:23.620 "rw_ios_per_sec": 0, 00:10:23.620 "rw_mbytes_per_sec": 0, 00:10:23.620 "r_mbytes_per_sec": 0, 00:10:23.620 "w_mbytes_per_sec": 0 00:10:23.620 }, 00:10:23.620 "claimed": true, 00:10:23.620 "claim_type": "exclusive_write", 00:10:23.620 "zoned": false, 00:10:23.620 "supported_io_types": { 00:10:23.620 "read": true, 00:10:23.620 "write": true, 00:10:23.620 "unmap": true, 00:10:23.620 "flush": true, 00:10:23.620 "reset": true, 00:10:23.620 "nvme_admin": false, 00:10:23.620 "nvme_io": false, 00:10:23.620 "nvme_io_md": false, 00:10:23.620 "write_zeroes": true, 00:10:23.620 "zcopy": true, 00:10:23.620 "get_zone_info": false, 00:10:23.620 "zone_management": false, 00:10:23.620 "zone_append": 
false, 00:10:23.620 "compare": false, 00:10:23.620 "compare_and_write": false, 00:10:23.620 "abort": true, 00:10:23.620 "seek_hole": false, 00:10:23.620 "seek_data": false, 00:10:23.620 "copy": true, 00:10:23.620 "nvme_iov_md": false 00:10:23.620 }, 00:10:23.620 "memory_domains": [ 00:10:23.620 { 00:10:23.620 "dma_device_id": "system", 00:10:23.620 "dma_device_type": 1 00:10:23.620 }, 00:10:23.620 { 00:10:23.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.620 "dma_device_type": 2 00:10:23.620 } 00:10:23.620 ], 00:10:23.620 "driver_specific": {} 00:10:23.620 }' 00:10:23.620 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.620 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.620 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:23.620 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.620 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.620 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:23.620 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.620 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.620 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:23.620 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:23.878 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:23.878 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:23.878 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:10:24.136 [2024-07-25 10:25:27.667969] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:24.136 [2024-07-25 10:25:27.668006] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.136 [2024-07-25 10:25:27.668065] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:24.136 10:25:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.136 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.393 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:24.393 "name": "Existed_Raid", 00:10:24.393 "uuid": "d0fba1ed-852a-4d68-9c6c-36b6a147a2d0", 00:10:24.393 "strip_size_kb": 64, 00:10:24.393 "state": "offline", 00:10:24.393 "raid_level": "concat", 00:10:24.393 "superblock": false, 00:10:24.393 "num_base_bdevs": 2, 00:10:24.393 "num_base_bdevs_discovered": 1, 00:10:24.393 "num_base_bdevs_operational": 1, 00:10:24.393 "base_bdevs_list": [ 00:10:24.393 { 00:10:24.393 "name": null, 00:10:24.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.393 "is_configured": false, 00:10:24.393 "data_offset": 0, 00:10:24.393 "data_size": 65536 00:10:24.394 }, 00:10:24.394 { 00:10:24.394 "name": "BaseBdev2", 00:10:24.394 "uuid": "776614cc-ce5b-4cc1-9802-f68a750124ba", 00:10:24.394 "is_configured": true, 00:10:24.394 "data_offset": 0, 00:10:24.394 "data_size": 65536 00:10:24.394 } 00:10:24.394 ] 00:10:24.394 }' 00:10:24.394 10:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:24.394 10:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.960 10:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:24.960 10:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:24.960 10:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:10:24.960 10:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:25.218 10:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:25.218 10:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.218 10:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:25.476 [2024-07-25 10:25:28.990788] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:25.476 [2024-07-25 10:25:28.990854] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x865b40 name Existed_Raid, state offline 00:10:25.476 10:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:25.476 10:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:25.476 10:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.476 10:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:25.734 10:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:25.734 10:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:25.734 10:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:25.734 10:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2338052 00:10:25.734 10:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2338052 ']' 00:10:25.734 10:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2338052 00:10:25.734 10:25:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:25.734 10:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.734 10:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2338052 00:10:25.734 10:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:25.734 10:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:25.734 10:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2338052' 00:10:25.734 killing process with pid 2338052 00:10:25.734 10:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2338052 00:10:25.734 [2024-07-25 10:25:29.335924] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.734 10:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2338052 00:10:25.734 [2024-07-25 10:25:29.336977] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:25.992 00:10:25.992 real 0m10.587s 00:10:25.992 user 0m19.114s 00:10:25.992 sys 0m1.509s 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.992 ************************************ 00:10:25.992 END TEST raid_state_function_test 00:10:25.992 ************************************ 00:10:25.992 10:25:29 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:25.992 10:25:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:25.992 10:25:29 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.992 10:25:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.992 ************************************ 00:10:25.992 START TEST raid_state_function_test_sb 00:10:25.992 ************************************ 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:25.992 10:25:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:25.992 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2339604 00:10:25.993 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:25.993 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2339604' 00:10:25.993 Process raid pid: 2339604 00:10:25.993 10:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2339604 /var/tmp/spdk-raid.sock 00:10:25.993 10:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2339604 ']' 00:10:25.993 10:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:25.993 10:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 
00:10:25.993 10:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:25.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:25.993 10:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.993 10:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.252 [2024-07-25 10:25:29.716737] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:10:26.252 [2024-07-25 10:25:29.716820] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.252 [2024-07-25 10:25:29.793404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.252 [2024-07-25 10:25:29.905349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.510 [2024-07-25 10:25:29.979092] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.510 [2024-07-25 10:25:29.979158] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.075 10:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:27.075 10:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:27.075 10:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:27.333 [2024-07-25 10:25:30.911636] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.333 [2024-07-25 
10:25:30.911683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.333 [2024-07-25 10:25:30.911696] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.333 [2024-07-25 10:25:30.911710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.333 10:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:27.333 10:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:27.333 10:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:27.333 10:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:27.333 10:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:27.333 10:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:27.333 10:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:27.333 10:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:27.333 10:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:27.333 10:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:27.333 10:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.333 10:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.590 10:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:10:27.590 "name": "Existed_Raid", 00:10:27.590 "uuid": "3d1fe18b-6400-48dc-a408-c70974050ae4", 00:10:27.590 "strip_size_kb": 64, 00:10:27.590 "state": "configuring", 00:10:27.590 "raid_level": "concat", 00:10:27.590 "superblock": true, 00:10:27.590 "num_base_bdevs": 2, 00:10:27.590 "num_base_bdevs_discovered": 0, 00:10:27.590 "num_base_bdevs_operational": 2, 00:10:27.590 "base_bdevs_list": [ 00:10:27.590 { 00:10:27.590 "name": "BaseBdev1", 00:10:27.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.590 "is_configured": false, 00:10:27.590 "data_offset": 0, 00:10:27.590 "data_size": 0 00:10:27.590 }, 00:10:27.590 { 00:10:27.590 "name": "BaseBdev2", 00:10:27.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.590 "is_configured": false, 00:10:27.591 "data_offset": 0, 00:10:27.591 "data_size": 0 00:10:27.591 } 00:10:27.591 ] 00:10:27.591 }' 00:10:27.591 10:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.591 10:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.155 10:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:28.413 [2024-07-25 10:25:31.950238] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.413 [2024-07-25 10:25:31.950270] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x919600 name Existed_Raid, state configuring 00:10:28.413 10:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:28.671 [2024-07-25 10:25:32.178838] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:28.671 [2024-07-25 
10:25:32.178866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:28.671 [2024-07-25 10:25:32.178890] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:28.671 [2024-07-25 10:25:32.178900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:28.671 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:28.930 [2024-07-25 10:25:32.421594] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.930 BaseBdev1 00:10:28.930 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:28.930 10:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:28.930 10:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.930 10:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:28.930 10:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.930 10:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.930 10:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:29.188 10:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:29.445 [ 00:10:29.445 { 00:10:29.445 "name": "BaseBdev1", 00:10:29.445 "aliases": [ 00:10:29.445 "1183b353-4017-4dd2-ace3-2f8a5dd76539" 
00:10:29.445 ], 00:10:29.445 "product_name": "Malloc disk", 00:10:29.445 "block_size": 512, 00:10:29.445 "num_blocks": 65536, 00:10:29.445 "uuid": "1183b353-4017-4dd2-ace3-2f8a5dd76539", 00:10:29.445 "assigned_rate_limits": { 00:10:29.445 "rw_ios_per_sec": 0, 00:10:29.445 "rw_mbytes_per_sec": 0, 00:10:29.445 "r_mbytes_per_sec": 0, 00:10:29.445 "w_mbytes_per_sec": 0 00:10:29.445 }, 00:10:29.445 "claimed": true, 00:10:29.445 "claim_type": "exclusive_write", 00:10:29.445 "zoned": false, 00:10:29.445 "supported_io_types": { 00:10:29.445 "read": true, 00:10:29.445 "write": true, 00:10:29.445 "unmap": true, 00:10:29.445 "flush": true, 00:10:29.445 "reset": true, 00:10:29.445 "nvme_admin": false, 00:10:29.445 "nvme_io": false, 00:10:29.445 "nvme_io_md": false, 00:10:29.445 "write_zeroes": true, 00:10:29.445 "zcopy": true, 00:10:29.445 "get_zone_info": false, 00:10:29.445 "zone_management": false, 00:10:29.445 "zone_append": false, 00:10:29.445 "compare": false, 00:10:29.445 "compare_and_write": false, 00:10:29.445 "abort": true, 00:10:29.445 "seek_hole": false, 00:10:29.445 "seek_data": false, 00:10:29.445 "copy": true, 00:10:29.445 "nvme_iov_md": false 00:10:29.445 }, 00:10:29.445 "memory_domains": [ 00:10:29.445 { 00:10:29.445 "dma_device_id": "system", 00:10:29.445 "dma_device_type": 1 00:10:29.445 }, 00:10:29.445 { 00:10:29.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.445 "dma_device_type": 2 00:10:29.445 } 00:10:29.445 ], 00:10:29.445 "driver_specific": {} 00:10:29.445 } 00:10:29.445 ] 00:10:29.445 10:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:29.445 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:29.445 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:29.445 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:10:29.445 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:29.445 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:29.445 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:29.445 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:29.445 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:29.445 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:29.445 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:29.445 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.445 10:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.703 10:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:29.703 "name": "Existed_Raid", 00:10:29.703 "uuid": "40930aa0-c174-4861-9430-91212a09ebf3", 00:10:29.703 "strip_size_kb": 64, 00:10:29.703 "state": "configuring", 00:10:29.703 "raid_level": "concat", 00:10:29.703 "superblock": true, 00:10:29.703 "num_base_bdevs": 2, 00:10:29.703 "num_base_bdevs_discovered": 1, 00:10:29.703 "num_base_bdevs_operational": 2, 00:10:29.703 "base_bdevs_list": [ 00:10:29.703 { 00:10:29.703 "name": "BaseBdev1", 00:10:29.703 "uuid": "1183b353-4017-4dd2-ace3-2f8a5dd76539", 00:10:29.703 "is_configured": true, 00:10:29.703 "data_offset": 2048, 00:10:29.703 "data_size": 63488 00:10:29.703 }, 00:10:29.703 { 00:10:29.703 "name": "BaseBdev2", 00:10:29.703 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:29.703 "is_configured": false, 00:10:29.703 "data_offset": 0, 00:10:29.703 "data_size": 0 00:10:29.703 } 00:10:29.703 ] 00:10:29.703 }' 00:10:29.703 10:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:29.703 10:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.268 10:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:30.268 [2024-07-25 10:25:33.953597] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.268 [2024-07-25 10:25:33.953639] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x918e50 name Existed_Raid, state configuring 00:10:30.268 10:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:30.526 [2024-07-25 10:25:34.194263] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.526 [2024-07-25 10:25:34.195516] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.526 [2024-07-25 10:25:34.195543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.526 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.783 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:30.784 "name": "Existed_Raid", 00:10:30.784 "uuid": "2d29c6db-3e96-410d-a567-2fd310b9fca8", 00:10:30.784 "strip_size_kb": 64, 00:10:30.784 "state": "configuring", 00:10:30.784 "raid_level": "concat", 00:10:30.784 "superblock": true, 00:10:30.784 "num_base_bdevs": 2, 00:10:30.784 "num_base_bdevs_discovered": 1, 00:10:30.784 "num_base_bdevs_operational": 2, 00:10:30.784 "base_bdevs_list": [ 00:10:30.784 { 00:10:30.784 "name": "BaseBdev1", 00:10:30.784 "uuid": "1183b353-4017-4dd2-ace3-2f8a5dd76539", 00:10:30.784 "is_configured": true, 00:10:30.784 "data_offset": 2048, 00:10:30.784 "data_size": 
63488 00:10:30.784 }, 00:10:30.784 { 00:10:30.784 "name": "BaseBdev2", 00:10:30.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.784 "is_configured": false, 00:10:30.784 "data_offset": 0, 00:10:30.784 "data_size": 0 00:10:30.784 } 00:10:30.784 ] 00:10:30.784 }' 00:10:30.784 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:30.784 10:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.349 10:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:31.607 [2024-07-25 10:25:35.222702] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.607 [2024-07-25 10:25:35.222902] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x919b40 00:10:31.607 [2024-07-25 10:25:35.222917] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:31.607 [2024-07-25 10:25:35.223055] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x91a8e0 00:10:31.607 [2024-07-25 10:25:35.223198] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x919b40 00:10:31.607 [2024-07-25 10:25:35.223216] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x919b40 00:10:31.607 [2024-07-25 10:25:35.223307] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.607 BaseBdev2 00:10:31.607 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:31.607 10:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:31.607 10:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.607 10:25:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.607 10:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.607 10:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.607 10:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:31.866 10:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:32.124 [ 00:10:32.124 { 00:10:32.124 "name": "BaseBdev2", 00:10:32.124 "aliases": [ 00:10:32.124 "61333768-6fb6-47fe-8f8c-07928e562799" 00:10:32.124 ], 00:10:32.124 "product_name": "Malloc disk", 00:10:32.124 "block_size": 512, 00:10:32.124 "num_blocks": 65536, 00:10:32.124 "uuid": "61333768-6fb6-47fe-8f8c-07928e562799", 00:10:32.124 "assigned_rate_limits": { 00:10:32.124 "rw_ios_per_sec": 0, 00:10:32.124 "rw_mbytes_per_sec": 0, 00:10:32.124 "r_mbytes_per_sec": 0, 00:10:32.124 "w_mbytes_per_sec": 0 00:10:32.124 }, 00:10:32.124 "claimed": true, 00:10:32.124 "claim_type": "exclusive_write", 00:10:32.124 "zoned": false, 00:10:32.124 "supported_io_types": { 00:10:32.124 "read": true, 00:10:32.124 "write": true, 00:10:32.124 "unmap": true, 00:10:32.124 "flush": true, 00:10:32.124 "reset": true, 00:10:32.124 "nvme_admin": false, 00:10:32.124 "nvme_io": false, 00:10:32.124 "nvme_io_md": false, 00:10:32.124 "write_zeroes": true, 00:10:32.124 "zcopy": true, 00:10:32.124 "get_zone_info": false, 00:10:32.124 "zone_management": false, 00:10:32.124 "zone_append": false, 00:10:32.124 "compare": false, 00:10:32.124 "compare_and_write": false, 00:10:32.124 "abort": true, 00:10:32.124 "seek_hole": false, 00:10:32.124 "seek_data": false, 
00:10:32.124 "copy": true, 00:10:32.124 "nvme_iov_md": false 00:10:32.124 }, 00:10:32.124 "memory_domains": [ 00:10:32.124 { 00:10:32.124 "dma_device_id": "system", 00:10:32.124 "dma_device_type": 1 00:10:32.124 }, 00:10:32.124 { 00:10:32.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.124 "dma_device_type": 2 00:10:32.124 } 00:10:32.124 ], 00:10:32.124 "driver_specific": {} 00:10:32.124 } 00:10:32.124 ] 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.124 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.382 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:32.382 "name": "Existed_Raid", 00:10:32.382 "uuid": "2d29c6db-3e96-410d-a567-2fd310b9fca8", 00:10:32.382 "strip_size_kb": 64, 00:10:32.382 "state": "online", 00:10:32.382 "raid_level": "concat", 00:10:32.382 "superblock": true, 00:10:32.382 "num_base_bdevs": 2, 00:10:32.382 "num_base_bdevs_discovered": 2, 00:10:32.382 "num_base_bdevs_operational": 2, 00:10:32.382 "base_bdevs_list": [ 00:10:32.382 { 00:10:32.382 "name": "BaseBdev1", 00:10:32.382 "uuid": "1183b353-4017-4dd2-ace3-2f8a5dd76539", 00:10:32.382 "is_configured": true, 00:10:32.382 "data_offset": 2048, 00:10:32.382 "data_size": 63488 00:10:32.382 }, 00:10:32.382 { 00:10:32.382 "name": "BaseBdev2", 00:10:32.382 "uuid": "61333768-6fb6-47fe-8f8c-07928e562799", 00:10:32.382 "is_configured": true, 00:10:32.382 "data_offset": 2048, 00:10:32.382 "data_size": 63488 00:10:32.382 } 00:10:32.382 ] 00:10:32.382 }' 00:10:32.382 10:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:32.382 10:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.951 10:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:32.951 10:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:32.951 10:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:32.951 10:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:32.951 10:25:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:32.951 10:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:32.951 10:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:32.951 10:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:33.209 [2024-07-25 10:25:36.730871] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.209 10:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:33.209 "name": "Existed_Raid", 00:10:33.209 "aliases": [ 00:10:33.209 "2d29c6db-3e96-410d-a567-2fd310b9fca8" 00:10:33.209 ], 00:10:33.209 "product_name": "Raid Volume", 00:10:33.209 "block_size": 512, 00:10:33.209 "num_blocks": 126976, 00:10:33.209 "uuid": "2d29c6db-3e96-410d-a567-2fd310b9fca8", 00:10:33.209 "assigned_rate_limits": { 00:10:33.209 "rw_ios_per_sec": 0, 00:10:33.209 "rw_mbytes_per_sec": 0, 00:10:33.209 "r_mbytes_per_sec": 0, 00:10:33.209 "w_mbytes_per_sec": 0 00:10:33.209 }, 00:10:33.209 "claimed": false, 00:10:33.209 "zoned": false, 00:10:33.209 "supported_io_types": { 00:10:33.209 "read": true, 00:10:33.209 "write": true, 00:10:33.209 "unmap": true, 00:10:33.209 "flush": true, 00:10:33.209 "reset": true, 00:10:33.209 "nvme_admin": false, 00:10:33.209 "nvme_io": false, 00:10:33.209 "nvme_io_md": false, 00:10:33.209 "write_zeroes": true, 00:10:33.209 "zcopy": false, 00:10:33.209 "get_zone_info": false, 00:10:33.209 "zone_management": false, 00:10:33.209 "zone_append": false, 00:10:33.209 "compare": false, 00:10:33.209 "compare_and_write": false, 00:10:33.209 "abort": false, 00:10:33.209 "seek_hole": false, 00:10:33.209 "seek_data": false, 00:10:33.209 "copy": false, 00:10:33.209 "nvme_iov_md": false 00:10:33.209 }, 00:10:33.209 
"memory_domains": [ 00:10:33.209 { 00:10:33.209 "dma_device_id": "system", 00:10:33.209 "dma_device_type": 1 00:10:33.209 }, 00:10:33.209 { 00:10:33.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.209 "dma_device_type": 2 00:10:33.209 }, 00:10:33.209 { 00:10:33.209 "dma_device_id": "system", 00:10:33.209 "dma_device_type": 1 00:10:33.209 }, 00:10:33.209 { 00:10:33.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.209 "dma_device_type": 2 00:10:33.209 } 00:10:33.209 ], 00:10:33.209 "driver_specific": { 00:10:33.209 "raid": { 00:10:33.209 "uuid": "2d29c6db-3e96-410d-a567-2fd310b9fca8", 00:10:33.209 "strip_size_kb": 64, 00:10:33.209 "state": "online", 00:10:33.209 "raid_level": "concat", 00:10:33.209 "superblock": true, 00:10:33.209 "num_base_bdevs": 2, 00:10:33.209 "num_base_bdevs_discovered": 2, 00:10:33.209 "num_base_bdevs_operational": 2, 00:10:33.209 "base_bdevs_list": [ 00:10:33.209 { 00:10:33.209 "name": "BaseBdev1", 00:10:33.209 "uuid": "1183b353-4017-4dd2-ace3-2f8a5dd76539", 00:10:33.209 "is_configured": true, 00:10:33.209 "data_offset": 2048, 00:10:33.209 "data_size": 63488 00:10:33.209 }, 00:10:33.209 { 00:10:33.209 "name": "BaseBdev2", 00:10:33.210 "uuid": "61333768-6fb6-47fe-8f8c-07928e562799", 00:10:33.210 "is_configured": true, 00:10:33.210 "data_offset": 2048, 00:10:33.210 "data_size": 63488 00:10:33.210 } 00:10:33.210 ] 00:10:33.210 } 00:10:33.210 } 00:10:33.210 }' 00:10:33.210 10:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.210 10:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:33.210 BaseBdev2' 00:10:33.210 10:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:33.210 10:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:33.210 10:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:33.468 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:33.468 "name": "BaseBdev1", 00:10:33.468 "aliases": [ 00:10:33.468 "1183b353-4017-4dd2-ace3-2f8a5dd76539" 00:10:33.468 ], 00:10:33.468 "product_name": "Malloc disk", 00:10:33.468 "block_size": 512, 00:10:33.468 "num_blocks": 65536, 00:10:33.468 "uuid": "1183b353-4017-4dd2-ace3-2f8a5dd76539", 00:10:33.468 "assigned_rate_limits": { 00:10:33.468 "rw_ios_per_sec": 0, 00:10:33.468 "rw_mbytes_per_sec": 0, 00:10:33.468 "r_mbytes_per_sec": 0, 00:10:33.468 "w_mbytes_per_sec": 0 00:10:33.468 }, 00:10:33.468 "claimed": true, 00:10:33.468 "claim_type": "exclusive_write", 00:10:33.468 "zoned": false, 00:10:33.468 "supported_io_types": { 00:10:33.468 "read": true, 00:10:33.468 "write": true, 00:10:33.468 "unmap": true, 00:10:33.468 "flush": true, 00:10:33.468 "reset": true, 00:10:33.468 "nvme_admin": false, 00:10:33.468 "nvme_io": false, 00:10:33.468 "nvme_io_md": false, 00:10:33.468 "write_zeroes": true, 00:10:33.468 "zcopy": true, 00:10:33.468 "get_zone_info": false, 00:10:33.468 "zone_management": false, 00:10:33.468 "zone_append": false, 00:10:33.468 "compare": false, 00:10:33.468 "compare_and_write": false, 00:10:33.468 "abort": true, 00:10:33.468 "seek_hole": false, 00:10:33.468 "seek_data": false, 00:10:33.468 "copy": true, 00:10:33.468 "nvme_iov_md": false 00:10:33.468 }, 00:10:33.468 "memory_domains": [ 00:10:33.468 { 00:10:33.468 "dma_device_id": "system", 00:10:33.468 "dma_device_type": 1 00:10:33.468 }, 00:10:33.468 { 00:10:33.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.468 "dma_device_type": 2 00:10:33.468 } 00:10:33.468 ], 00:10:33.468 "driver_specific": {} 00:10:33.468 }' 00:10:33.468 10:25:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:33.468 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:33.468 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:33.468 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:33.468 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:33.726 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:33.726 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:33.726 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:33.726 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:33.726 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:33.726 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:33.726 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:33.726 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:33.726 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:33.726 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:33.984 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:33.984 "name": "BaseBdev2", 00:10:33.984 "aliases": [ 00:10:33.984 "61333768-6fb6-47fe-8f8c-07928e562799" 00:10:33.984 ], 00:10:33.984 "product_name": "Malloc disk", 00:10:33.984 "block_size": 512, 00:10:33.984 
"num_blocks": 65536, 00:10:33.984 "uuid": "61333768-6fb6-47fe-8f8c-07928e562799", 00:10:33.984 "assigned_rate_limits": { 00:10:33.984 "rw_ios_per_sec": 0, 00:10:33.984 "rw_mbytes_per_sec": 0, 00:10:33.984 "r_mbytes_per_sec": 0, 00:10:33.984 "w_mbytes_per_sec": 0 00:10:33.984 }, 00:10:33.984 "claimed": true, 00:10:33.984 "claim_type": "exclusive_write", 00:10:33.984 "zoned": false, 00:10:33.984 "supported_io_types": { 00:10:33.984 "read": true, 00:10:33.984 "write": true, 00:10:33.984 "unmap": true, 00:10:33.984 "flush": true, 00:10:33.984 "reset": true, 00:10:33.984 "nvme_admin": false, 00:10:33.984 "nvme_io": false, 00:10:33.984 "nvme_io_md": false, 00:10:33.984 "write_zeroes": true, 00:10:33.984 "zcopy": true, 00:10:33.984 "get_zone_info": false, 00:10:33.984 "zone_management": false, 00:10:33.984 "zone_append": false, 00:10:33.984 "compare": false, 00:10:33.984 "compare_and_write": false, 00:10:33.984 "abort": true, 00:10:33.984 "seek_hole": false, 00:10:33.984 "seek_data": false, 00:10:33.984 "copy": true, 00:10:33.984 "nvme_iov_md": false 00:10:33.984 }, 00:10:33.984 "memory_domains": [ 00:10:33.984 { 00:10:33.984 "dma_device_id": "system", 00:10:33.984 "dma_device_type": 1 00:10:33.984 }, 00:10:33.984 { 00:10:33.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.984 "dma_device_type": 2 00:10:33.984 } 00:10:33.984 ], 00:10:33.984 "driver_specific": {} 00:10:33.984 }' 00:10:33.984 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:33.984 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:33.984 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:33.984 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:33.984 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:34.242 10:25:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:34.242 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:34.242 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:34.242 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:34.242 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:34.242 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:34.242 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:34.242 10:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:34.500 [2024-07-25 10:25:38.046229] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.500 [2024-07-25 10:25:38.046257] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.500 [2024-07-25 10:25:38.046303] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.500 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.758 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:34.758 "name": "Existed_Raid", 00:10:34.758 "uuid": "2d29c6db-3e96-410d-a567-2fd310b9fca8", 00:10:34.758 "strip_size_kb": 64, 00:10:34.758 "state": "offline", 00:10:34.758 "raid_level": "concat", 00:10:34.758 "superblock": true, 00:10:34.758 "num_base_bdevs": 2, 00:10:34.758 "num_base_bdevs_discovered": 1, 00:10:34.758 "num_base_bdevs_operational": 1, 00:10:34.758 "base_bdevs_list": [ 00:10:34.758 { 00:10:34.758 "name": null, 00:10:34.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.758 "is_configured": false, 00:10:34.758 "data_offset": 2048, 
00:10:34.758 "data_size": 63488 00:10:34.758 }, 00:10:34.758 { 00:10:34.758 "name": "BaseBdev2", 00:10:34.758 "uuid": "61333768-6fb6-47fe-8f8c-07928e562799", 00:10:34.758 "is_configured": true, 00:10:34.758 "data_offset": 2048, 00:10:34.758 "data_size": 63488 00:10:34.758 } 00:10:34.758 ] 00:10:34.758 }' 00:10:34.759 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:34.759 10:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.330 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:35.330 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:35.330 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.330 10:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:35.654 10:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:35.654 10:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.654 10:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:35.654 [2024-07-25 10:25:39.304204] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.654 [2024-07-25 10:25:39.304265] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x919b40 name Existed_Raid, state offline 00:10:35.654 10:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:35.654 10:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:35.654 10:25:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.654 10:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:35.912 10:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:35.912 10:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:35.912 10:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:35.912 10:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2339604 00:10:35.912 10:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2339604 ']' 00:10:35.912 10:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2339604 00:10:35.912 10:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:35.912 10:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.912 10:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2339604 00:10:35.912 10:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:35.912 10:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:35.912 10:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2339604' 00:10:35.912 killing process with pid 2339604 00:10:36.170 10:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2339604 00:10:36.170 [2024-07-25 10:25:39.621089] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.170 10:25:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2339604 00:10:36.170 [2024-07-25 10:25:39.622217] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.429 10:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:36.429 00:10:36.429 real 0m10.234s 00:10:36.429 user 0m18.429s 00:10:36.429 sys 0m1.499s 00:10:36.429 10:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.429 10:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.429 ************************************ 00:10:36.429 END TEST raid_state_function_test_sb 00:10:36.429 ************************************ 00:10:36.429 10:25:39 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:36.429 10:25:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:36.429 10:25:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.429 10:25:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.429 ************************************ 00:10:36.429 START TEST raid_superblock_test 00:10:36.429 ************************************ 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # 
local base_bdevs_pt 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2341035 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2341035 /var/tmp/spdk-raid.sock 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2341035 ']' 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:36.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.429 10:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.429 [2024-07-25 10:25:40.001420] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:10:36.429 [2024-07-25 10:25:40.001544] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2341035 ] 00:10:36.429 [2024-07-25 10:25:40.088677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.687 [2024-07-25 10:25:40.208177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.687 [2024-07-25 10:25:40.280441] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.687 [2024-07-25 10:25:40.280489] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.253 10:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.253 10:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:37.253 10:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:37.253 10:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:37.253 10:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:37.253 10:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:37.253 10:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:37.253 
10:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.253 10:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.253 10:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.253 10:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:37.820 malloc1 00:10:37.820 10:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:37.820 [2024-07-25 10:25:41.475286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:37.820 [2024-07-25 10:25:41.475367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.820 [2024-07-25 10:25:41.475395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23622b0 00:10:37.820 [2024-07-25 10:25:41.475425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.820 [2024-07-25 10:25:41.477081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.820 [2024-07-25 10:25:41.477129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:37.820 pt1 00:10:37.820 10:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:37.820 10:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:37.820 10:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:37.820 10:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:37.820 10:25:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:37.820 10:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.820 10:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.820 10:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.820 10:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:38.079 malloc2 00:10:38.337 10:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:38.337 [2024-07-25 10:25:42.020965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:38.337 [2024-07-25 10:25:42.021040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.337 [2024-07-25 10:25:42.021062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25151e0 00:10:38.337 [2024-07-25 10:25:42.021075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.337 [2024-07-25 10:25:42.022699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.337 [2024-07-25 10:25:42.022721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:38.337 pt2 00:10:38.337 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:38.337 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:38.337 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:10:38.904 [2024-07-25 10:25:42.309820] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:38.904 [2024-07-25 10:25:42.311311] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.904 [2024-07-25 10:25:42.311501] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x24f9df0 00:10:38.904 [2024-07-25 10:25:42.311519] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:38.904 [2024-07-25 10:25:42.311777] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24fa9b0 00:10:38.904 [2024-07-25 10:25:42.311967] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24f9df0 00:10:38.904 [2024-07-25 10:25:42.311983] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24f9df0 00:10:38.904 [2024-07-25 10:25:42.312133] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.904 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:38.904 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:38.904 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:38.904 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:38.904 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:38.904 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:38.904 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:38.904 10:25:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:38.904 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:38.904 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:38.904 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.904 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.904 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:38.904 "name": "raid_bdev1", 00:10:38.904 "uuid": "b58a6560-f2f9-4677-83e9-950002eebd41", 00:10:38.904 "strip_size_kb": 64, 00:10:38.904 "state": "online", 00:10:38.904 "raid_level": "concat", 00:10:38.904 "superblock": true, 00:10:38.904 "num_base_bdevs": 2, 00:10:38.904 "num_base_bdevs_discovered": 2, 00:10:38.904 "num_base_bdevs_operational": 2, 00:10:38.904 "base_bdevs_list": [ 00:10:38.904 { 00:10:38.904 "name": "pt1", 00:10:38.904 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.904 "is_configured": true, 00:10:38.905 "data_offset": 2048, 00:10:38.905 "data_size": 63488 00:10:38.905 }, 00:10:38.905 { 00:10:38.905 "name": "pt2", 00:10:38.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.905 "is_configured": true, 00:10:38.905 "data_offset": 2048, 00:10:38.905 "data_size": 63488 00:10:38.905 } 00:10:38.905 ] 00:10:38.905 }' 00:10:38.905 10:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:38.905 10:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.471 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:39.471 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:39.471 10:25:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:39.471 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:39.471 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:39.471 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:39.471 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:39.471 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:39.729 [2024-07-25 10:25:43.352779] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.729 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:39.729 "name": "raid_bdev1", 00:10:39.729 "aliases": [ 00:10:39.729 "b58a6560-f2f9-4677-83e9-950002eebd41" 00:10:39.729 ], 00:10:39.729 "product_name": "Raid Volume", 00:10:39.729 "block_size": 512, 00:10:39.729 "num_blocks": 126976, 00:10:39.729 "uuid": "b58a6560-f2f9-4677-83e9-950002eebd41", 00:10:39.729 "assigned_rate_limits": { 00:10:39.729 "rw_ios_per_sec": 0, 00:10:39.729 "rw_mbytes_per_sec": 0, 00:10:39.729 "r_mbytes_per_sec": 0, 00:10:39.729 "w_mbytes_per_sec": 0 00:10:39.729 }, 00:10:39.729 "claimed": false, 00:10:39.729 "zoned": false, 00:10:39.729 "supported_io_types": { 00:10:39.729 "read": true, 00:10:39.729 "write": true, 00:10:39.729 "unmap": true, 00:10:39.729 "flush": true, 00:10:39.729 "reset": true, 00:10:39.729 "nvme_admin": false, 00:10:39.729 "nvme_io": false, 00:10:39.729 "nvme_io_md": false, 00:10:39.729 "write_zeroes": true, 00:10:39.729 "zcopy": false, 00:10:39.729 "get_zone_info": false, 00:10:39.729 "zone_management": false, 00:10:39.729 "zone_append": false, 00:10:39.729 "compare": false, 00:10:39.729 "compare_and_write": false, 00:10:39.729 
"abort": false, 00:10:39.729 "seek_hole": false, 00:10:39.729 "seek_data": false, 00:10:39.729 "copy": false, 00:10:39.729 "nvme_iov_md": false 00:10:39.729 }, 00:10:39.729 "memory_domains": [ 00:10:39.729 { 00:10:39.729 "dma_device_id": "system", 00:10:39.729 "dma_device_type": 1 00:10:39.729 }, 00:10:39.729 { 00:10:39.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.729 "dma_device_type": 2 00:10:39.729 }, 00:10:39.729 { 00:10:39.729 "dma_device_id": "system", 00:10:39.729 "dma_device_type": 1 00:10:39.729 }, 00:10:39.729 { 00:10:39.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.729 "dma_device_type": 2 00:10:39.729 } 00:10:39.729 ], 00:10:39.729 "driver_specific": { 00:10:39.729 "raid": { 00:10:39.729 "uuid": "b58a6560-f2f9-4677-83e9-950002eebd41", 00:10:39.729 "strip_size_kb": 64, 00:10:39.729 "state": "online", 00:10:39.729 "raid_level": "concat", 00:10:39.729 "superblock": true, 00:10:39.729 "num_base_bdevs": 2, 00:10:39.729 "num_base_bdevs_discovered": 2, 00:10:39.729 "num_base_bdevs_operational": 2, 00:10:39.729 "base_bdevs_list": [ 00:10:39.729 { 00:10:39.729 "name": "pt1", 00:10:39.729 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.729 "is_configured": true, 00:10:39.729 "data_offset": 2048, 00:10:39.729 "data_size": 63488 00:10:39.729 }, 00:10:39.729 { 00:10:39.729 "name": "pt2", 00:10:39.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.729 "is_configured": true, 00:10:39.729 "data_offset": 2048, 00:10:39.729 "data_size": 63488 00:10:39.729 } 00:10:39.729 ] 00:10:39.729 } 00:10:39.729 } 00:10:39.729 }' 00:10:39.729 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.729 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:39.729 pt2' 00:10:39.729 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:39.729 10:25:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:39.729 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:39.987 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:39.987 "name": "pt1", 00:10:39.987 "aliases": [ 00:10:39.987 "00000000-0000-0000-0000-000000000001" 00:10:39.987 ], 00:10:39.987 "product_name": "passthru", 00:10:39.987 "block_size": 512, 00:10:39.987 "num_blocks": 65536, 00:10:39.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.987 "assigned_rate_limits": { 00:10:39.987 "rw_ios_per_sec": 0, 00:10:39.987 "rw_mbytes_per_sec": 0, 00:10:39.987 "r_mbytes_per_sec": 0, 00:10:39.987 "w_mbytes_per_sec": 0 00:10:39.987 }, 00:10:39.987 "claimed": true, 00:10:39.987 "claim_type": "exclusive_write", 00:10:39.987 "zoned": false, 00:10:39.987 "supported_io_types": { 00:10:39.987 "read": true, 00:10:39.987 "write": true, 00:10:39.987 "unmap": true, 00:10:39.987 "flush": true, 00:10:39.987 "reset": true, 00:10:39.987 "nvme_admin": false, 00:10:39.987 "nvme_io": false, 00:10:39.987 "nvme_io_md": false, 00:10:39.987 "write_zeroes": true, 00:10:39.987 "zcopy": true, 00:10:39.987 "get_zone_info": false, 00:10:39.987 "zone_management": false, 00:10:39.987 "zone_append": false, 00:10:39.987 "compare": false, 00:10:39.987 "compare_and_write": false, 00:10:39.987 "abort": true, 00:10:39.987 "seek_hole": false, 00:10:39.987 "seek_data": false, 00:10:39.987 "copy": true, 00:10:39.987 "nvme_iov_md": false 00:10:39.987 }, 00:10:39.987 "memory_domains": [ 00:10:39.987 { 00:10:39.987 "dma_device_id": "system", 00:10:39.987 "dma_device_type": 1 00:10:39.987 }, 00:10:39.987 { 00:10:39.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.987 "dma_device_type": 2 00:10:39.987 } 00:10:39.987 ], 00:10:39.987 "driver_specific": { 00:10:39.987 "passthru": { 00:10:39.987 
"name": "pt1", 00:10:39.987 "base_bdev_name": "malloc1" 00:10:39.987 } 00:10:39.987 } 00:10:39.987 }' 00:10:39.987 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:39.987 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:40.245 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:40.245 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:40.245 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:40.245 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:40.245 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:40.245 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:40.245 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:40.245 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:40.245 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:40.245 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:40.245 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:40.245 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:40.245 10:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:40.503 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:40.503 "name": "pt2", 00:10:40.503 "aliases": [ 00:10:40.503 "00000000-0000-0000-0000-000000000002" 00:10:40.503 ], 00:10:40.503 "product_name": "passthru", 00:10:40.503 "block_size": 512, 00:10:40.503 
"num_blocks": 65536, 00:10:40.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.503 "assigned_rate_limits": { 00:10:40.503 "rw_ios_per_sec": 0, 00:10:40.503 "rw_mbytes_per_sec": 0, 00:10:40.503 "r_mbytes_per_sec": 0, 00:10:40.503 "w_mbytes_per_sec": 0 00:10:40.503 }, 00:10:40.503 "claimed": true, 00:10:40.503 "claim_type": "exclusive_write", 00:10:40.503 "zoned": false, 00:10:40.503 "supported_io_types": { 00:10:40.503 "read": true, 00:10:40.503 "write": true, 00:10:40.503 "unmap": true, 00:10:40.503 "flush": true, 00:10:40.503 "reset": true, 00:10:40.503 "nvme_admin": false, 00:10:40.503 "nvme_io": false, 00:10:40.503 "nvme_io_md": false, 00:10:40.503 "write_zeroes": true, 00:10:40.503 "zcopy": true, 00:10:40.503 "get_zone_info": false, 00:10:40.503 "zone_management": false, 00:10:40.503 "zone_append": false, 00:10:40.503 "compare": false, 00:10:40.503 "compare_and_write": false, 00:10:40.503 "abort": true, 00:10:40.503 "seek_hole": false, 00:10:40.503 "seek_data": false, 00:10:40.503 "copy": true, 00:10:40.503 "nvme_iov_md": false 00:10:40.503 }, 00:10:40.503 "memory_domains": [ 00:10:40.503 { 00:10:40.503 "dma_device_id": "system", 00:10:40.503 "dma_device_type": 1 00:10:40.503 }, 00:10:40.503 { 00:10:40.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.503 "dma_device_type": 2 00:10:40.503 } 00:10:40.503 ], 00:10:40.503 "driver_specific": { 00:10:40.503 "passthru": { 00:10:40.503 "name": "pt2", 00:10:40.503 "base_bdev_name": "malloc2" 00:10:40.503 } 00:10:40.503 } 00:10:40.503 }' 00:10:40.503 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:40.761 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:40.761 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:40.761 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:40.761 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
jq .md_size 00:10:40.761 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:40.761 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:40.761 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:40.761 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:40.761 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:40.761 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:41.019 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:41.019 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:41.019 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:41.019 [2024-07-25 10:25:44.716633] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.277 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b58a6560-f2f9-4677-83e9-950002eebd41 00:10:41.277 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z b58a6560-f2f9-4677-83e9-950002eebd41 ']' 00:10:41.277 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:41.277 [2024-07-25 10:25:44.957011] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.277 [2024-07-25 10:25:44.957043] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.277 [2024-07-25 10:25:44.957140] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.277 [2024-07-25 
10:25:44.957201] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.277 [2024-07-25 10:25:44.957216] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24f9df0 name raid_bdev1, state offline 00:10:41.277 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.277 10:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:41.536 10:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:41.536 10:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:41.536 10:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:41.536 10:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:41.794 10:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:41.794 10:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:42.052 10:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:42.052 10:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:42.310 10:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:42.310 10:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:42.310 10:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:42.310 10:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:42.310 10:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:42.310 10:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.310 10:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:42.310 10:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.310 10:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:42.310 10:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.310 10:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:42.310 10:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:42.310 10:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:42.569 [2024-07-25 10:25:46.192294] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:42.569 [2024-07-25 
10:25:46.193634] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:42.569 [2024-07-25 10:25:46.193705] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:42.569 [2024-07-25 10:25:46.193776] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:42.569 [2024-07-25 10:25:46.193799] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.569 [2024-07-25 10:25:46.193808] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2360d90 name raid_bdev1, state configuring 00:10:42.569 request: 00:10:42.569 { 00:10:42.569 "name": "raid_bdev1", 00:10:42.569 "raid_level": "concat", 00:10:42.569 "base_bdevs": [ 00:10:42.569 "malloc1", 00:10:42.569 "malloc2" 00:10:42.569 ], 00:10:42.569 "strip_size_kb": 64, 00:10:42.569 "superblock": false, 00:10:42.569 "method": "bdev_raid_create", 00:10:42.569 "req_id": 1 00:10:42.569 } 00:10:42.569 Got JSON-RPC error response 00:10:42.569 response: 00:10:42.569 { 00:10:42.569 "code": -17, 00:10:42.569 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:42.569 } 00:10:42.569 10:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:42.569 10:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:42.569 10:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:42.569 10:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:42.569 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.569 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:42.827 10:25:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:42.827 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:42.827 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:43.085 [2024-07-25 10:25:46.685507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:43.085 [2024-07-25 10:25:46.685559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.085 [2024-07-25 10:25:46.685582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24fa7b0 00:10:43.085 [2024-07-25 10:25:46.685597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.085 [2024-07-25 10:25:46.687209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.085 [2024-07-25 10:25:46.687236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:43.085 [2024-07-25 10:25:46.687311] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:43.085 [2024-07-25 10:25:46.687347] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:43.085 pt1 00:10:43.085 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:43.085 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:43.085 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:43.085 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:43.085 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:43.085 
10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:43.085 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:43.085 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:43.085 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:43.085 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:43.085 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.085 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.343 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:43.343 "name": "raid_bdev1", 00:10:43.343 "uuid": "b58a6560-f2f9-4677-83e9-950002eebd41", 00:10:43.343 "strip_size_kb": 64, 00:10:43.343 "state": "configuring", 00:10:43.343 "raid_level": "concat", 00:10:43.343 "superblock": true, 00:10:43.343 "num_base_bdevs": 2, 00:10:43.343 "num_base_bdevs_discovered": 1, 00:10:43.343 "num_base_bdevs_operational": 2, 00:10:43.343 "base_bdevs_list": [ 00:10:43.343 { 00:10:43.343 "name": "pt1", 00:10:43.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.343 "is_configured": true, 00:10:43.343 "data_offset": 2048, 00:10:43.343 "data_size": 63488 00:10:43.343 }, 00:10:43.343 { 00:10:43.343 "name": null, 00:10:43.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.343 "is_configured": false, 00:10:43.343 "data_offset": 2048, 00:10:43.343 "data_size": 63488 00:10:43.343 } 00:10:43.343 ] 00:10:43.343 }' 00:10:43.343 10:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:43.343 10:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:10:43.909 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:10:43.909 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:43.909 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:43.909 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.167 [2024-07-25 10:25:47.688201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.167 [2024-07-25 10:25:47.688284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.167 [2024-07-25 10:25:47.688308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23616c0 00:10:44.167 [2024-07-25 10:25:47.688321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.167 [2024-07-25 10:25:47.688744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.167 [2024-07-25 10:25:47.688765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.167 [2024-07-25 10:25:47.688850] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:44.167 [2024-07-25 10:25:47.688874] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.167 [2024-07-25 10:25:47.688984] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x23610d0 00:10:44.167 [2024-07-25 10:25:47.688997] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:44.167 [2024-07-25 10:25:47.689164] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24fb810 00:10:44.168 [2024-07-25 10:25:47.689297] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x23610d0 00:10:44.168 [2024-07-25 10:25:47.689310] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23610d0 00:10:44.168 [2024-07-25 10:25:47.689425] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.168 pt2 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.168 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.425 10:25:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:44.425 "name": "raid_bdev1", 00:10:44.425 "uuid": "b58a6560-f2f9-4677-83e9-950002eebd41", 00:10:44.425 "strip_size_kb": 64, 00:10:44.425 "state": "online", 00:10:44.425 "raid_level": "concat", 00:10:44.426 "superblock": true, 00:10:44.426 "num_base_bdevs": 2, 00:10:44.426 "num_base_bdevs_discovered": 2, 00:10:44.426 "num_base_bdevs_operational": 2, 00:10:44.426 "base_bdevs_list": [ 00:10:44.426 { 00:10:44.426 "name": "pt1", 00:10:44.426 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.426 "is_configured": true, 00:10:44.426 "data_offset": 2048, 00:10:44.426 "data_size": 63488 00:10:44.426 }, 00:10:44.426 { 00:10:44.426 "name": "pt2", 00:10:44.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.426 "is_configured": true, 00:10:44.426 "data_offset": 2048, 00:10:44.426 "data_size": 63488 00:10:44.426 } 00:10:44.426 ] 00:10:44.426 }' 00:10:44.426 10:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:44.426 10:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.992 10:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:44.992 10:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:44.992 10:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:44.992 10:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:44.992 10:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:44.992 10:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:44.992 10:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:44.992 10:25:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:45.250 [2024-07-25 10:25:48.723154] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.250 10:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:45.250 "name": "raid_bdev1", 00:10:45.250 "aliases": [ 00:10:45.250 "b58a6560-f2f9-4677-83e9-950002eebd41" 00:10:45.250 ], 00:10:45.250 "product_name": "Raid Volume", 00:10:45.250 "block_size": 512, 00:10:45.250 "num_blocks": 126976, 00:10:45.250 "uuid": "b58a6560-f2f9-4677-83e9-950002eebd41", 00:10:45.250 "assigned_rate_limits": { 00:10:45.250 "rw_ios_per_sec": 0, 00:10:45.250 "rw_mbytes_per_sec": 0, 00:10:45.250 "r_mbytes_per_sec": 0, 00:10:45.250 "w_mbytes_per_sec": 0 00:10:45.250 }, 00:10:45.250 "claimed": false, 00:10:45.250 "zoned": false, 00:10:45.250 "supported_io_types": { 00:10:45.250 "read": true, 00:10:45.250 "write": true, 00:10:45.250 "unmap": true, 00:10:45.250 "flush": true, 00:10:45.250 "reset": true, 00:10:45.250 "nvme_admin": false, 00:10:45.250 "nvme_io": false, 00:10:45.250 "nvme_io_md": false, 00:10:45.250 "write_zeroes": true, 00:10:45.250 "zcopy": false, 00:10:45.250 "get_zone_info": false, 00:10:45.250 "zone_management": false, 00:10:45.250 "zone_append": false, 00:10:45.250 "compare": false, 00:10:45.250 "compare_and_write": false, 00:10:45.250 "abort": false, 00:10:45.250 "seek_hole": false, 00:10:45.250 "seek_data": false, 00:10:45.250 "copy": false, 00:10:45.250 "nvme_iov_md": false 00:10:45.250 }, 00:10:45.250 "memory_domains": [ 00:10:45.250 { 00:10:45.250 "dma_device_id": "system", 00:10:45.250 "dma_device_type": 1 00:10:45.250 }, 00:10:45.250 { 00:10:45.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.250 "dma_device_type": 2 00:10:45.250 }, 00:10:45.250 { 00:10:45.250 "dma_device_id": "system", 00:10:45.250 "dma_device_type": 1 00:10:45.250 }, 00:10:45.250 { 00:10:45.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.250 
"dma_device_type": 2 00:10:45.250 } 00:10:45.250 ], 00:10:45.250 "driver_specific": { 00:10:45.250 "raid": { 00:10:45.250 "uuid": "b58a6560-f2f9-4677-83e9-950002eebd41", 00:10:45.250 "strip_size_kb": 64, 00:10:45.251 "state": "online", 00:10:45.251 "raid_level": "concat", 00:10:45.251 "superblock": true, 00:10:45.251 "num_base_bdevs": 2, 00:10:45.251 "num_base_bdevs_discovered": 2, 00:10:45.251 "num_base_bdevs_operational": 2, 00:10:45.251 "base_bdevs_list": [ 00:10:45.251 { 00:10:45.251 "name": "pt1", 00:10:45.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.251 "is_configured": true, 00:10:45.251 "data_offset": 2048, 00:10:45.251 "data_size": 63488 00:10:45.251 }, 00:10:45.251 { 00:10:45.251 "name": "pt2", 00:10:45.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.251 "is_configured": true, 00:10:45.251 "data_offset": 2048, 00:10:45.251 "data_size": 63488 00:10:45.251 } 00:10:45.251 ] 00:10:45.251 } 00:10:45.251 } 00:10:45.251 }' 00:10:45.251 10:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.251 10:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:45.251 pt2' 00:10:45.251 10:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:45.251 10:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:45.251 10:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:45.509 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:45.509 "name": "pt1", 00:10:45.509 "aliases": [ 00:10:45.509 "00000000-0000-0000-0000-000000000001" 00:10:45.509 ], 00:10:45.509 "product_name": "passthru", 00:10:45.509 "block_size": 512, 00:10:45.509 "num_blocks": 65536, 
00:10:45.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.509 "assigned_rate_limits": { 00:10:45.509 "rw_ios_per_sec": 0, 00:10:45.509 "rw_mbytes_per_sec": 0, 00:10:45.509 "r_mbytes_per_sec": 0, 00:10:45.509 "w_mbytes_per_sec": 0 00:10:45.509 }, 00:10:45.509 "claimed": true, 00:10:45.509 "claim_type": "exclusive_write", 00:10:45.509 "zoned": false, 00:10:45.509 "supported_io_types": { 00:10:45.509 "read": true, 00:10:45.509 "write": true, 00:10:45.509 "unmap": true, 00:10:45.509 "flush": true, 00:10:45.509 "reset": true, 00:10:45.509 "nvme_admin": false, 00:10:45.509 "nvme_io": false, 00:10:45.509 "nvme_io_md": false, 00:10:45.509 "write_zeroes": true, 00:10:45.509 "zcopy": true, 00:10:45.509 "get_zone_info": false, 00:10:45.509 "zone_management": false, 00:10:45.509 "zone_append": false, 00:10:45.509 "compare": false, 00:10:45.509 "compare_and_write": false, 00:10:45.509 "abort": true, 00:10:45.509 "seek_hole": false, 00:10:45.509 "seek_data": false, 00:10:45.509 "copy": true, 00:10:45.509 "nvme_iov_md": false 00:10:45.509 }, 00:10:45.509 "memory_domains": [ 00:10:45.509 { 00:10:45.509 "dma_device_id": "system", 00:10:45.509 "dma_device_type": 1 00:10:45.509 }, 00:10:45.509 { 00:10:45.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.509 "dma_device_type": 2 00:10:45.509 } 00:10:45.509 ], 00:10:45.509 "driver_specific": { 00:10:45.509 "passthru": { 00:10:45.509 "name": "pt1", 00:10:45.509 "base_bdev_name": "malloc1" 00:10:45.509 } 00:10:45.509 } 00:10:45.509 }' 00:10:45.509 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:45.509 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:45.509 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:45.509 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:45.509 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:10:45.509 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:45.509 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:45.509 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:45.767 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:45.767 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:45.767 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:45.767 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:45.767 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:45.767 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:45.767 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:46.025 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:46.025 "name": "pt2", 00:10:46.025 "aliases": [ 00:10:46.025 "00000000-0000-0000-0000-000000000002" 00:10:46.025 ], 00:10:46.025 "product_name": "passthru", 00:10:46.025 "block_size": 512, 00:10:46.025 "num_blocks": 65536, 00:10:46.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.025 "assigned_rate_limits": { 00:10:46.025 "rw_ios_per_sec": 0, 00:10:46.025 "rw_mbytes_per_sec": 0, 00:10:46.025 "r_mbytes_per_sec": 0, 00:10:46.025 "w_mbytes_per_sec": 0 00:10:46.025 }, 00:10:46.025 "claimed": true, 00:10:46.025 "claim_type": "exclusive_write", 00:10:46.025 "zoned": false, 00:10:46.025 "supported_io_types": { 00:10:46.025 "read": true, 00:10:46.025 "write": true, 00:10:46.025 "unmap": true, 00:10:46.025 "flush": true, 00:10:46.025 "reset": true, 00:10:46.025 "nvme_admin": 
false, 00:10:46.025 "nvme_io": false, 00:10:46.025 "nvme_io_md": false, 00:10:46.025 "write_zeroes": true, 00:10:46.025 "zcopy": true, 00:10:46.025 "get_zone_info": false, 00:10:46.025 "zone_management": false, 00:10:46.025 "zone_append": false, 00:10:46.025 "compare": false, 00:10:46.025 "compare_and_write": false, 00:10:46.025 "abort": true, 00:10:46.025 "seek_hole": false, 00:10:46.025 "seek_data": false, 00:10:46.025 "copy": true, 00:10:46.025 "nvme_iov_md": false 00:10:46.025 }, 00:10:46.025 "memory_domains": [ 00:10:46.025 { 00:10:46.025 "dma_device_id": "system", 00:10:46.025 "dma_device_type": 1 00:10:46.025 }, 00:10:46.025 { 00:10:46.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.025 "dma_device_type": 2 00:10:46.025 } 00:10:46.025 ], 00:10:46.025 "driver_specific": { 00:10:46.025 "passthru": { 00:10:46.025 "name": "pt2", 00:10:46.025 "base_bdev_name": "malloc2" 00:10:46.025 } 00:10:46.025 } 00:10:46.025 }' 00:10:46.025 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:46.025 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:46.025 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:46.025 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:46.025 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:46.025 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:46.025 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:46.283 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:46.283 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:46.283 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:46.283 10:25:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:46.283 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:46.283 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:46.283 10:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:46.541 [2024-07-25 10:25:50.070756] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' b58a6560-f2f9-4677-83e9-950002eebd41 '!=' b58a6560-f2f9-4677-83e9-950002eebd41 ']' 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2341035 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2341035 ']' 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2341035 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2341035 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 2341035' 00:10:46.541 killing process with pid 2341035 00:10:46.541 10:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2341035 00:10:46.542 [2024-07-25 10:25:50.123356] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.542 10:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2341035 00:10:46.542 [2024-07-25 10:25:50.123448] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.542 [2024-07-25 10:25:50.123511] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.542 [2024-07-25 10:25:50.123526] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23610d0 name raid_bdev1, state offline 00:10:46.542 [2024-07-25 10:25:50.146394] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.801 10:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:46.801 00:10:46.801 real 0m10.481s 00:10:46.801 user 0m18.926s 00:10:46.801 sys 0m1.545s 00:10:46.801 10:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.801 10:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.801 ************************************ 00:10:46.801 END TEST raid_superblock_test 00:10:46.801 ************************************ 00:10:46.801 10:25:50 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:46.801 10:25:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:46.801 10:25:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.801 10:25:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.801 ************************************ 00:10:46.801 START TEST raid_read_error_test 00:10:46.801 ************************************ 00:10:46.801 10:25:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:46.801 10:25:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.kpforRphHb 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2342457 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2342457 /var/tmp/spdk-raid.sock 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2342457 ']' 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:46.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:46.801 10:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:46.802 10:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.061 [2024-07-25 10:25:50.543260] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:10:47.061 [2024-07-25 10:25:50.543329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2342457 ] 00:10:47.061 [2024-07-25 10:25:50.625530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.061 [2024-07-25 10:25:50.746987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.319 [2024-07-25 10:25:50.819381] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.319 [2024-07-25 10:25:50.819426] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.886 10:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:47.886 10:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:47.886 10:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:47.886 10:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:48.145 BaseBdev1_malloc 00:10:48.145 10:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:48.404 true 00:10:48.404 10:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:48.662 [2024-07-25 10:25:52.199350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:48.662 [2024-07-25 10:25:52.199401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:48.662 [2024-07-25 10:25:52.199423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12a1250 00:10:48.662 [2024-07-25 10:25:52.199438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.662 [2024-07-25 10:25:52.201024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.662 [2024-07-25 10:25:52.201053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:48.662 BaseBdev1 00:10:48.662 10:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:48.662 10:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:48.919 BaseBdev2_malloc 00:10:48.919 10:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:49.178 true 00:10:49.178 10:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:49.436 [2024-07-25 10:25:52.952863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:49.436 [2024-07-25 10:25:52.952916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.436 [2024-07-25 10:25:52.952940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1290650 00:10:49.436 [2024-07-25 10:25:52.952955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.436 [2024-07-25 10:25:52.954481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.436 [2024-07-25 10:25:52.954509] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:49.436 BaseBdev2 00:10:49.436 10:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:49.694 [2024-07-25 10:25:53.205568] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.695 [2024-07-25 10:25:53.206746] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.695 [2024-07-25 10:25:53.206975] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x12877d0 00:10:49.695 [2024-07-25 10:25:53.206994] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:49.695 [2024-07-25 10:25:53.207178] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10e4530 00:10:49.695 [2024-07-25 10:25:53.207358] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12877d0 00:10:49.695 [2024-07-25 10:25:53.207374] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12877d0 00:10:49.695 [2024-07-25 10:25:53.207494] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.695 10:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:49.695 10:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:49.695 10:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:49.695 10:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:49.695 10:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:49.695 10:25:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:49.695 10:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:49.695 10:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:49.695 10:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:49.695 10:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:49.695 10:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.695 10:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.953 10:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.953 "name": "raid_bdev1", 00:10:49.953 "uuid": "d50ceda6-8687-466e-90d8-6f20782c2427", 00:10:49.953 "strip_size_kb": 64, 00:10:49.953 "state": "online", 00:10:49.953 "raid_level": "concat", 00:10:49.953 "superblock": true, 00:10:49.953 "num_base_bdevs": 2, 00:10:49.953 "num_base_bdevs_discovered": 2, 00:10:49.953 "num_base_bdevs_operational": 2, 00:10:49.953 "base_bdevs_list": [ 00:10:49.953 { 00:10:49.953 "name": "BaseBdev1", 00:10:49.953 "uuid": "ee8af624-8aba-5c37-a746-01a44eda000f", 00:10:49.953 "is_configured": true, 00:10:49.953 "data_offset": 2048, 00:10:49.953 "data_size": 63488 00:10:49.953 }, 00:10:49.953 { 00:10:49.953 "name": "BaseBdev2", 00:10:49.953 "uuid": "6bd43e14-f3fc-5e91-99aa-4f98ed47a666", 00:10:49.953 "is_configured": true, 00:10:49.953 "data_offset": 2048, 00:10:49.953 "data_size": 63488 00:10:49.953 } 00:10:49.953 ] 00:10:49.953 }' 00:10:49.953 10:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:49.953 10:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.520 10:25:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:50.520 10:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:50.521 [2024-07-25 10:25:54.152652] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10e4f10 00:10:51.456 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:51.715 10:25:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.715 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.973 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:51.973 "name": "raid_bdev1", 00:10:51.973 "uuid": "d50ceda6-8687-466e-90d8-6f20782c2427", 00:10:51.973 "strip_size_kb": 64, 00:10:51.973 "state": "online", 00:10:51.973 "raid_level": "concat", 00:10:51.973 "superblock": true, 00:10:51.973 "num_base_bdevs": 2, 00:10:51.973 "num_base_bdevs_discovered": 2, 00:10:51.973 "num_base_bdevs_operational": 2, 00:10:51.973 "base_bdevs_list": [ 00:10:51.973 { 00:10:51.973 "name": "BaseBdev1", 00:10:51.973 "uuid": "ee8af624-8aba-5c37-a746-01a44eda000f", 00:10:51.973 "is_configured": true, 00:10:51.973 "data_offset": 2048, 00:10:51.973 "data_size": 63488 00:10:51.973 }, 00:10:51.973 { 00:10:51.973 "name": "BaseBdev2", 00:10:51.973 "uuid": "6bd43e14-f3fc-5e91-99aa-4f98ed47a666", 00:10:51.973 "is_configured": true, 00:10:51.973 "data_offset": 2048, 00:10:51.973 "data_size": 63488 00:10:51.973 } 00:10:51.973 ] 00:10:51.973 }' 00:10:51.973 10:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:51.973 10:25:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.539 10:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:52.797 [2024-07-25 10:25:56.384405] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.797 [2024-07-25 10:25:56.384464] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing 
from online to offline 00:10:52.797 [2024-07-25 10:25:56.387470] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.798 [2024-07-25 10:25:56.387509] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.798 [2024-07-25 10:25:56.387542] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.798 [2024-07-25 10:25:56.387556] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12877d0 name raid_bdev1, state offline 00:10:52.798 0 00:10:52.798 10:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2342457 00:10:52.798 10:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2342457 ']' 00:10:52.798 10:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2342457 00:10:52.798 10:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:52.798 10:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:52.798 10:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2342457 00:10:52.798 10:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:52.798 10:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:52.798 10:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2342457' 00:10:52.798 killing process with pid 2342457 00:10:52.798 10:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2342457 00:10:52.798 [2024-07-25 10:25:56.432236] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.798 10:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2342457 00:10:52.798 [2024-07-25 10:25:56.446895] 
bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.056 10:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.kpforRphHb 00:10:53.056 10:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:53.056 10:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:53.056 10:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:10:53.056 10:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:53.056 10:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:53.056 10:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:53.056 10:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:10:53.056 00:10:53.056 real 0m6.260s 00:10:53.056 user 0m9.908s 00:10:53.056 sys 0m0.882s 00:10:53.056 10:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.056 10:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.056 ************************************ 00:10:53.056 END TEST raid_read_error_test 00:10:53.056 ************************************ 00:10:53.342 10:25:56 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:53.342 10:25:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:53.342 10:25:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.342 10:25:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.342 ************************************ 00:10:53.342 START TEST raid_write_error_test 00:10:53.342 ************************************ 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:10:53.342 10:25:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:53.342 10:25:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.M8BqyLmg8W 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2343349 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:53.342 10:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2343349 /var/tmp/spdk-raid.sock 00:10:53.343 10:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2343349 ']' 00:10:53.343 10:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:53.343 10:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.343 10:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:53.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:53.343 10:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.343 10:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.343 [2024-07-25 10:25:56.853750] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:10:53.343 [2024-07-25 10:25:56.853828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343349 ] 00:10:53.343 [2024-07-25 10:25:56.936739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.601 [2024-07-25 10:25:57.059313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.601 [2024-07-25 10:25:57.132343] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.601 [2024-07-25 10:25:57.132388] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.167 10:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.167 10:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:54.167 10:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:54.167 10:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:54.425 BaseBdev1_malloc 00:10:54.425 10:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:54.683 true 00:10:54.683 10:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:54.941 [2024-07-25 10:25:58.499057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:54.941 [2024-07-25 10:25:58.499130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:10:54.941 [2024-07-25 10:25:58.499155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e3f250 00:10:54.941 [2024-07-25 10:25:58.499172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.941 [2024-07-25 10:25:58.500815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.941 [2024-07-25 10:25:58.500845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:54.941 BaseBdev1 00:10:54.941 10:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:54.941 10:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:55.199 BaseBdev2_malloc 00:10:55.199 10:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:55.457 true 00:10:55.457 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:55.716 [2024-07-25 10:25:59.240305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:55.716 [2024-07-25 10:25:59.240357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.716 [2024-07-25 10:25:59.240381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e2e650 00:10:55.716 [2024-07-25 10:25:59.240396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.716 [2024-07-25 10:25:59.241939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.716 [2024-07-25 10:25:59.241967] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:55.716 BaseBdev2 00:10:55.716 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:55.974 [2024-07-25 10:25:59.485041] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.974 [2024-07-25 10:25:59.486418] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.974 [2024-07-25 10:25:59.486642] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e257d0 00:10:55.974 [2024-07-25 10:25:59.486661] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:55.974 [2024-07-25 10:25:59.486881] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c82530 00:10:55.974 [2024-07-25 10:25:59.487075] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e257d0 00:10:55.974 [2024-07-25 10:25:59.487090] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e257d0 00:10:55.974 [2024-07-25 10:25:59.487240] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.974 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:55.974 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:55.974 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:55.974 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:55.974 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:55.974 10:25:59 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:55.974 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:55.974 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:55.974 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:55.974 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:55.974 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.974 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.232 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:56.232 "name": "raid_bdev1", 00:10:56.232 "uuid": "24b268d8-535b-4cc5-8c94-c96a7699a233", 00:10:56.232 "strip_size_kb": 64, 00:10:56.232 "state": "online", 00:10:56.232 "raid_level": "concat", 00:10:56.232 "superblock": true, 00:10:56.232 "num_base_bdevs": 2, 00:10:56.232 "num_base_bdevs_discovered": 2, 00:10:56.232 "num_base_bdevs_operational": 2, 00:10:56.232 "base_bdevs_list": [ 00:10:56.232 { 00:10:56.232 "name": "BaseBdev1", 00:10:56.232 "uuid": "65bb0aab-943b-50fd-ae3a-6adce22ad136", 00:10:56.232 "is_configured": true, 00:10:56.232 "data_offset": 2048, 00:10:56.232 "data_size": 63488 00:10:56.232 }, 00:10:56.232 { 00:10:56.232 "name": "BaseBdev2", 00:10:56.232 "uuid": "12a24ef3-ccad-5995-b965-350941914630", 00:10:56.232 "is_configured": true, 00:10:56.232 "data_offset": 2048, 00:10:56.232 "data_size": 63488 00:10:56.232 } 00:10:56.232 ] 00:10:56.232 }' 00:10:56.232 10:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:56.232 10:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.795 
10:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:56.795 10:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:56.795 [2024-07-25 10:26:00.439938] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c82f10 00:10:57.727 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.984 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.242 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:58.242 "name": "raid_bdev1", 00:10:58.242 "uuid": "24b268d8-535b-4cc5-8c94-c96a7699a233", 00:10:58.242 "strip_size_kb": 64, 00:10:58.242 "state": "online", 00:10:58.242 "raid_level": "concat", 00:10:58.242 "superblock": true, 00:10:58.242 "num_base_bdevs": 2, 00:10:58.242 "num_base_bdevs_discovered": 2, 00:10:58.242 "num_base_bdevs_operational": 2, 00:10:58.242 "base_bdevs_list": [ 00:10:58.242 { 00:10:58.242 "name": "BaseBdev1", 00:10:58.242 "uuid": "65bb0aab-943b-50fd-ae3a-6adce22ad136", 00:10:58.242 "is_configured": true, 00:10:58.242 "data_offset": 2048, 00:10:58.242 "data_size": 63488 00:10:58.242 }, 00:10:58.242 { 00:10:58.242 "name": "BaseBdev2", 00:10:58.242 "uuid": "12a24ef3-ccad-5995-b965-350941914630", 00:10:58.242 "is_configured": true, 00:10:58.242 "data_offset": 2048, 00:10:58.242 "data_size": 63488 00:10:58.242 } 00:10:58.242 ] 00:10:58.242 }' 00:10:58.242 10:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:58.242 10:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.807 10:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:59.064 [2024-07-25 10:26:02.614272] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.064 [2024-07-25 10:26:02.614330] bdev_raid.c:1870:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:10:59.064 [2024-07-25 10:26:02.617330] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.064 [2024-07-25 10:26:02.617368] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.064 [2024-07-25 10:26:02.617402] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.064 [2024-07-25 10:26:02.617415] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e257d0 name raid_bdev1, state offline 00:10:59.064 0 00:10:59.064 10:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2343349 00:10:59.064 10:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2343349 ']' 00:10:59.064 10:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2343349 00:10:59.064 10:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:59.064 10:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.064 10:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2343349 00:10:59.064 10:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:59.064 10:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:59.064 10:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2343349' 00:10:59.064 killing process with pid 2343349 00:10:59.064 10:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2343349 00:10:59.064 [2024-07-25 10:26:02.666361] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.064 10:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2343349 
00:10:59.064 [2024-07-25 10:26:02.681690] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.322 10:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.M8BqyLmg8W 00:10:59.323 10:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:59.323 10:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:59.323 10:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:10:59.323 10:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:59.323 10:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:59.323 10:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:59.323 10:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:10:59.323 00:10:59.323 real 0m6.189s 00:10:59.323 user 0m9.763s 00:10:59.323 sys 0m0.889s 00:10:59.323 10:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.323 10:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.323 ************************************ 00:10:59.323 END TEST raid_write_error_test 00:10:59.323 ************************************ 00:10:59.323 10:26:03 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:59.323 10:26:03 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:59.323 10:26:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:59.323 10:26:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.323 10:26:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:59.323 ************************************ 00:10:59.323 START TEST raid_state_function_test 00:10:59.323 ************************************ 
00:10:59.323 10:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:10:59.323 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:10:59.323 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:59.323 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:59.581 10:26:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2344215 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2344215' 00:10:59.581 Process raid pid: 2344215 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2344215 /var/tmp/spdk-raid.sock 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2344215 ']' 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:59.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.581 10:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.581 [2024-07-25 10:26:03.086412] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:10:59.581 [2024-07-25 10:26:03.086491] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.581 [2024-07-25 10:26:03.170232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.839 [2024-07-25 10:26:03.292475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.839 [2024-07-25 10:26:03.360025] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.839 [2024-07-25 10:26:03.360068] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.403 10:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:00.403 10:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:00.403 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:00.660 [2024-07-25 10:26:04.258873] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.660 [2024-07-25 10:26:04.258920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.660 [2024-07-25 10:26:04.258932] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.660 [2024-07-25 10:26:04.258946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:11:00.660 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:00.660 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:00.660 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:00.661 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:00.661 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:00.661 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:00.661 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:00.661 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:00.661 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:00.661 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:00.661 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.661 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.918 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:00.918 "name": "Existed_Raid", 00:11:00.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.918 "strip_size_kb": 0, 00:11:00.918 "state": "configuring", 00:11:00.918 "raid_level": "raid1", 00:11:00.918 "superblock": false, 00:11:00.918 "num_base_bdevs": 2, 00:11:00.918 "num_base_bdevs_discovered": 0, 00:11:00.918 "num_base_bdevs_operational": 2, 
00:11:00.918 "base_bdevs_list": [ 00:11:00.918 { 00:11:00.918 "name": "BaseBdev1", 00:11:00.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.918 "is_configured": false, 00:11:00.918 "data_offset": 0, 00:11:00.918 "data_size": 0 00:11:00.918 }, 00:11:00.918 { 00:11:00.918 "name": "BaseBdev2", 00:11:00.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.918 "is_configured": false, 00:11:00.918 "data_offset": 0, 00:11:00.918 "data_size": 0 00:11:00.918 } 00:11:00.918 ] 00:11:00.918 }' 00:11:00.918 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:00.918 10:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.483 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:01.741 [2024-07-25 10:26:05.305499] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.741 [2024-07-25 10:26:05.305537] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1838600 name Existed_Raid, state configuring 00:11:01.741 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:01.999 [2024-07-25 10:26:05.542134] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.999 [2024-07-25 10:26:05.542170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.999 [2024-07-25 10:26:05.542182] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.999 [2024-07-25 10:26:05.542195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.999 10:26:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:02.257 [2024-07-25 10:26:05.802902] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.257 BaseBdev1 00:11:02.257 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:02.257 10:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:02.257 10:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:02.257 10:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:02.257 10:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:02.257 10:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:02.257 10:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:02.515 10:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:02.773 [ 00:11:02.773 { 00:11:02.773 "name": "BaseBdev1", 00:11:02.773 "aliases": [ 00:11:02.773 "8bcc42d5-087e-4f38-b9ce-b4bcf479d8dc" 00:11:02.773 ], 00:11:02.773 "product_name": "Malloc disk", 00:11:02.773 "block_size": 512, 00:11:02.773 "num_blocks": 65536, 00:11:02.773 "uuid": "8bcc42d5-087e-4f38-b9ce-b4bcf479d8dc", 00:11:02.773 "assigned_rate_limits": { 00:11:02.773 "rw_ios_per_sec": 0, 00:11:02.773 "rw_mbytes_per_sec": 0, 00:11:02.773 "r_mbytes_per_sec": 0, 00:11:02.773 "w_mbytes_per_sec": 0 00:11:02.773 }, 00:11:02.773 "claimed": true, 
00:11:02.773 "claim_type": "exclusive_write", 00:11:02.773 "zoned": false, 00:11:02.773 "supported_io_types": { 00:11:02.773 "read": true, 00:11:02.773 "write": true, 00:11:02.773 "unmap": true, 00:11:02.773 "flush": true, 00:11:02.773 "reset": true, 00:11:02.773 "nvme_admin": false, 00:11:02.773 "nvme_io": false, 00:11:02.773 "nvme_io_md": false, 00:11:02.773 "write_zeroes": true, 00:11:02.773 "zcopy": true, 00:11:02.774 "get_zone_info": false, 00:11:02.774 "zone_management": false, 00:11:02.774 "zone_append": false, 00:11:02.774 "compare": false, 00:11:02.774 "compare_and_write": false, 00:11:02.774 "abort": true, 00:11:02.774 "seek_hole": false, 00:11:02.774 "seek_data": false, 00:11:02.774 "copy": true, 00:11:02.774 "nvme_iov_md": false 00:11:02.774 }, 00:11:02.774 "memory_domains": [ 00:11:02.774 { 00:11:02.774 "dma_device_id": "system", 00:11:02.774 "dma_device_type": 1 00:11:02.774 }, 00:11:02.774 { 00:11:02.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.774 "dma_device_type": 2 00:11:02.774 } 00:11:02.774 ], 00:11:02.774 "driver_specific": {} 00:11:02.774 } 00:11:02.774 ] 00:11:02.774 10:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:02.774 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:02.774 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:02.774 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:02.774 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:02.774 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:02.774 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:02.774 10:26:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:02.774 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:02.774 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:02.774 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:02.774 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:02.774 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.032 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:03.032 "name": "Existed_Raid", 00:11:03.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.032 "strip_size_kb": 0, 00:11:03.032 "state": "configuring", 00:11:03.032 "raid_level": "raid1", 00:11:03.032 "superblock": false, 00:11:03.032 "num_base_bdevs": 2, 00:11:03.032 "num_base_bdevs_discovered": 1, 00:11:03.032 "num_base_bdevs_operational": 2, 00:11:03.032 "base_bdevs_list": [ 00:11:03.032 { 00:11:03.032 "name": "BaseBdev1", 00:11:03.032 "uuid": "8bcc42d5-087e-4f38-b9ce-b4bcf479d8dc", 00:11:03.032 "is_configured": true, 00:11:03.032 "data_offset": 0, 00:11:03.032 "data_size": 65536 00:11:03.032 }, 00:11:03.032 { 00:11:03.032 "name": "BaseBdev2", 00:11:03.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.032 "is_configured": false, 00:11:03.032 "data_offset": 0, 00:11:03.032 "data_size": 0 00:11:03.032 } 00:11:03.032 ] 00:11:03.032 }' 00:11:03.032 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:03.032 10:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.598 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:03.856 [2024-07-25 10:26:07.314920] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.856 [2024-07-25 10:26:07.314967] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1837e50 name Existed_Raid, state configuring 00:11:03.856 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:04.114 [2024-07-25 10:26:07.575642] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.114 [2024-07-25 10:26:07.577192] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.114 [2024-07-25 10:26:07.577228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.114 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:04.114 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:04.114 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:04.114 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:04.114 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:04.114 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:04.114 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:04.114 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:04.114 10:26:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:04.114 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:04.114 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:04.114 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:04.114 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.114 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.372 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:04.372 "name": "Existed_Raid", 00:11:04.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.372 "strip_size_kb": 0, 00:11:04.372 "state": "configuring", 00:11:04.372 "raid_level": "raid1", 00:11:04.372 "superblock": false, 00:11:04.372 "num_base_bdevs": 2, 00:11:04.372 "num_base_bdevs_discovered": 1, 00:11:04.372 "num_base_bdevs_operational": 2, 00:11:04.372 "base_bdevs_list": [ 00:11:04.372 { 00:11:04.372 "name": "BaseBdev1", 00:11:04.372 "uuid": "8bcc42d5-087e-4f38-b9ce-b4bcf479d8dc", 00:11:04.372 "is_configured": true, 00:11:04.372 "data_offset": 0, 00:11:04.372 "data_size": 65536 00:11:04.372 }, 00:11:04.372 { 00:11:04.372 "name": "BaseBdev2", 00:11:04.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.372 "is_configured": false, 00:11:04.372 "data_offset": 0, 00:11:04.372 "data_size": 0 00:11:04.372 } 00:11:04.372 ] 00:11:04.372 }' 00:11:04.372 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:04.372 10:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.938 10:26:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:04.938 [2024-07-25 10:26:08.624035] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.938 [2024-07-25 10:26:08.624092] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1838b40 00:11:04.938 [2024-07-25 10:26:08.624115] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:04.938 [2024-07-25 10:26:08.624318] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1830380 00:11:04.938 [2024-07-25 10:26:08.624470] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1838b40 00:11:04.938 [2024-07-25 10:26:08.624487] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1838b40 00:11:04.938 [2024-07-25 10:26:08.624687] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.938 BaseBdev2 00:11:04.938 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:04.938 10:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:04.938 10:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:04.938 10:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:04.938 10:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:04.938 10:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:04.938 10:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:05.196 10:26:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.455 [ 00:11:05.455 { 00:11:05.455 "name": "BaseBdev2", 00:11:05.455 "aliases": [ 00:11:05.455 "138303ba-350f-48bd-9fd4-b022685f1f44" 00:11:05.455 ], 00:11:05.455 "product_name": "Malloc disk", 00:11:05.455 "block_size": 512, 00:11:05.455 "num_blocks": 65536, 00:11:05.455 "uuid": "138303ba-350f-48bd-9fd4-b022685f1f44", 00:11:05.455 "assigned_rate_limits": { 00:11:05.455 "rw_ios_per_sec": 0, 00:11:05.455 "rw_mbytes_per_sec": 0, 00:11:05.455 "r_mbytes_per_sec": 0, 00:11:05.455 "w_mbytes_per_sec": 0 00:11:05.455 }, 00:11:05.455 "claimed": true, 00:11:05.455 "claim_type": "exclusive_write", 00:11:05.455 "zoned": false, 00:11:05.455 "supported_io_types": { 00:11:05.455 "read": true, 00:11:05.455 "write": true, 00:11:05.455 "unmap": true, 00:11:05.455 "flush": true, 00:11:05.455 "reset": true, 00:11:05.455 "nvme_admin": false, 00:11:05.455 "nvme_io": false, 00:11:05.455 "nvme_io_md": false, 00:11:05.455 "write_zeroes": true, 00:11:05.455 "zcopy": true, 00:11:05.455 "get_zone_info": false, 00:11:05.455 "zone_management": false, 00:11:05.455 "zone_append": false, 00:11:05.455 "compare": false, 00:11:05.455 "compare_and_write": false, 00:11:05.455 "abort": true, 00:11:05.455 "seek_hole": false, 00:11:05.455 "seek_data": false, 00:11:05.455 "copy": true, 00:11:05.455 "nvme_iov_md": false 00:11:05.455 }, 00:11:05.455 "memory_domains": [ 00:11:05.455 { 00:11:05.455 "dma_device_id": "system", 00:11:05.455 "dma_device_type": 1 00:11:05.455 }, 00:11:05.455 { 00:11:05.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.455 "dma_device_type": 2 00:11:05.455 } 00:11:05.455 ], 00:11:05.455 "driver_specific": {} 00:11:05.455 } 00:11:05.455 ] 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.455 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.713 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:05.713 "name": "Existed_Raid", 00:11:05.713 "uuid": "384de1c7-08b3-40ef-8ef8-5edb6f5d18f8", 00:11:05.713 "strip_size_kb": 0, 00:11:05.713 "state": "online", 00:11:05.713 "raid_level": "raid1", 00:11:05.713 "superblock": false, 00:11:05.713 "num_base_bdevs": 
2, 00:11:05.713 "num_base_bdevs_discovered": 2, 00:11:05.713 "num_base_bdevs_operational": 2, 00:11:05.713 "base_bdevs_list": [ 00:11:05.713 { 00:11:05.713 "name": "BaseBdev1", 00:11:05.713 "uuid": "8bcc42d5-087e-4f38-b9ce-b4bcf479d8dc", 00:11:05.713 "is_configured": true, 00:11:05.713 "data_offset": 0, 00:11:05.713 "data_size": 65536 00:11:05.713 }, 00:11:05.713 { 00:11:05.713 "name": "BaseBdev2", 00:11:05.713 "uuid": "138303ba-350f-48bd-9fd4-b022685f1f44", 00:11:05.713 "is_configured": true, 00:11:05.713 "data_offset": 0, 00:11:05.713 "data_size": 65536 00:11:05.713 } 00:11:05.713 ] 00:11:05.713 }' 00:11:05.713 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:05.714 10:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.279 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:06.279 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:06.279 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:06.279 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:06.279 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:06.279 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:06.279 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:06.279 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:06.537 [2024-07-25 10:26:10.180451] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.537 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # 
raid_bdev_info='{ 00:11:06.537 "name": "Existed_Raid", 00:11:06.537 "aliases": [ 00:11:06.537 "384de1c7-08b3-40ef-8ef8-5edb6f5d18f8" 00:11:06.537 ], 00:11:06.537 "product_name": "Raid Volume", 00:11:06.537 "block_size": 512, 00:11:06.537 "num_blocks": 65536, 00:11:06.537 "uuid": "384de1c7-08b3-40ef-8ef8-5edb6f5d18f8", 00:11:06.537 "assigned_rate_limits": { 00:11:06.537 "rw_ios_per_sec": 0, 00:11:06.537 "rw_mbytes_per_sec": 0, 00:11:06.537 "r_mbytes_per_sec": 0, 00:11:06.537 "w_mbytes_per_sec": 0 00:11:06.537 }, 00:11:06.537 "claimed": false, 00:11:06.537 "zoned": false, 00:11:06.537 "supported_io_types": { 00:11:06.537 "read": true, 00:11:06.537 "write": true, 00:11:06.537 "unmap": false, 00:11:06.537 "flush": false, 00:11:06.537 "reset": true, 00:11:06.537 "nvme_admin": false, 00:11:06.537 "nvme_io": false, 00:11:06.537 "nvme_io_md": false, 00:11:06.537 "write_zeroes": true, 00:11:06.537 "zcopy": false, 00:11:06.537 "get_zone_info": false, 00:11:06.537 "zone_management": false, 00:11:06.537 "zone_append": false, 00:11:06.537 "compare": false, 00:11:06.537 "compare_and_write": false, 00:11:06.537 "abort": false, 00:11:06.537 "seek_hole": false, 00:11:06.537 "seek_data": false, 00:11:06.537 "copy": false, 00:11:06.537 "nvme_iov_md": false 00:11:06.537 }, 00:11:06.537 "memory_domains": [ 00:11:06.537 { 00:11:06.537 "dma_device_id": "system", 00:11:06.537 "dma_device_type": 1 00:11:06.537 }, 00:11:06.537 { 00:11:06.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.537 "dma_device_type": 2 00:11:06.537 }, 00:11:06.537 { 00:11:06.537 "dma_device_id": "system", 00:11:06.537 "dma_device_type": 1 00:11:06.537 }, 00:11:06.537 { 00:11:06.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.537 "dma_device_type": 2 00:11:06.537 } 00:11:06.537 ], 00:11:06.537 "driver_specific": { 00:11:06.537 "raid": { 00:11:06.537 "uuid": "384de1c7-08b3-40ef-8ef8-5edb6f5d18f8", 00:11:06.537 "strip_size_kb": 0, 00:11:06.537 "state": "online", 00:11:06.537 "raid_level": "raid1", 
00:11:06.537 "superblock": false, 00:11:06.537 "num_base_bdevs": 2, 00:11:06.537 "num_base_bdevs_discovered": 2, 00:11:06.537 "num_base_bdevs_operational": 2, 00:11:06.537 "base_bdevs_list": [ 00:11:06.537 { 00:11:06.537 "name": "BaseBdev1", 00:11:06.537 "uuid": "8bcc42d5-087e-4f38-b9ce-b4bcf479d8dc", 00:11:06.537 "is_configured": true, 00:11:06.537 "data_offset": 0, 00:11:06.537 "data_size": 65536 00:11:06.537 }, 00:11:06.537 { 00:11:06.537 "name": "BaseBdev2", 00:11:06.537 "uuid": "138303ba-350f-48bd-9fd4-b022685f1f44", 00:11:06.537 "is_configured": true, 00:11:06.537 "data_offset": 0, 00:11:06.537 "data_size": 65536 00:11:06.537 } 00:11:06.537 ] 00:11:06.537 } 00:11:06.537 } 00:11:06.537 }' 00:11:06.537 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.537 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:06.537 BaseBdev2' 00:11:06.538 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:06.538 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:06.538 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:06.796 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:06.796 "name": "BaseBdev1", 00:11:06.796 "aliases": [ 00:11:06.796 "8bcc42d5-087e-4f38-b9ce-b4bcf479d8dc" 00:11:06.796 ], 00:11:06.796 "product_name": "Malloc disk", 00:11:06.796 "block_size": 512, 00:11:06.796 "num_blocks": 65536, 00:11:06.796 "uuid": "8bcc42d5-087e-4f38-b9ce-b4bcf479d8dc", 00:11:06.796 "assigned_rate_limits": { 00:11:06.796 "rw_ios_per_sec": 0, 00:11:06.796 "rw_mbytes_per_sec": 0, 00:11:06.796 "r_mbytes_per_sec": 0, 00:11:06.796 
"w_mbytes_per_sec": 0 00:11:06.796 }, 00:11:06.796 "claimed": true, 00:11:06.796 "claim_type": "exclusive_write", 00:11:06.796 "zoned": false, 00:11:06.796 "supported_io_types": { 00:11:06.796 "read": true, 00:11:06.796 "write": true, 00:11:06.796 "unmap": true, 00:11:06.796 "flush": true, 00:11:06.796 "reset": true, 00:11:06.796 "nvme_admin": false, 00:11:06.796 "nvme_io": false, 00:11:06.796 "nvme_io_md": false, 00:11:06.796 "write_zeroes": true, 00:11:06.796 "zcopy": true, 00:11:06.796 "get_zone_info": false, 00:11:06.796 "zone_management": false, 00:11:06.796 "zone_append": false, 00:11:06.796 "compare": false, 00:11:06.796 "compare_and_write": false, 00:11:06.796 "abort": true, 00:11:06.796 "seek_hole": false, 00:11:06.796 "seek_data": false, 00:11:06.796 "copy": true, 00:11:06.796 "nvme_iov_md": false 00:11:06.796 }, 00:11:06.796 "memory_domains": [ 00:11:06.796 { 00:11:06.796 "dma_device_id": "system", 00:11:06.796 "dma_device_type": 1 00:11:06.796 }, 00:11:06.796 { 00:11:06.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.796 "dma_device_type": 2 00:11:06.796 } 00:11:06.796 ], 00:11:06.796 "driver_specific": {} 00:11:06.796 }' 00:11:06.796 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:07.054 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:07.054 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:07.054 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:07.054 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:07.054 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:07.054 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:07.054 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:07.054 
10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:07.054 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:07.054 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:07.312 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:07.312 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:07.312 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:07.312 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:07.312 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:07.312 "name": "BaseBdev2", 00:11:07.312 "aliases": [ 00:11:07.312 "138303ba-350f-48bd-9fd4-b022685f1f44" 00:11:07.312 ], 00:11:07.312 "product_name": "Malloc disk", 00:11:07.312 "block_size": 512, 00:11:07.312 "num_blocks": 65536, 00:11:07.312 "uuid": "138303ba-350f-48bd-9fd4-b022685f1f44", 00:11:07.312 "assigned_rate_limits": { 00:11:07.312 "rw_ios_per_sec": 0, 00:11:07.312 "rw_mbytes_per_sec": 0, 00:11:07.312 "r_mbytes_per_sec": 0, 00:11:07.312 "w_mbytes_per_sec": 0 00:11:07.312 }, 00:11:07.312 "claimed": true, 00:11:07.312 "claim_type": "exclusive_write", 00:11:07.312 "zoned": false, 00:11:07.312 "supported_io_types": { 00:11:07.312 "read": true, 00:11:07.312 "write": true, 00:11:07.312 "unmap": true, 00:11:07.312 "flush": true, 00:11:07.312 "reset": true, 00:11:07.312 "nvme_admin": false, 00:11:07.312 "nvme_io": false, 00:11:07.312 "nvme_io_md": false, 00:11:07.312 "write_zeroes": true, 00:11:07.312 "zcopy": true, 00:11:07.312 "get_zone_info": false, 00:11:07.312 "zone_management": false, 00:11:07.312 "zone_append": false, 00:11:07.312 "compare": 
false, 00:11:07.312 "compare_and_write": false, 00:11:07.312 "abort": true, 00:11:07.312 "seek_hole": false, 00:11:07.312 "seek_data": false, 00:11:07.312 "copy": true, 00:11:07.312 "nvme_iov_md": false 00:11:07.312 }, 00:11:07.312 "memory_domains": [ 00:11:07.312 { 00:11:07.312 "dma_device_id": "system", 00:11:07.312 "dma_device_type": 1 00:11:07.312 }, 00:11:07.312 { 00:11:07.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.312 "dma_device_type": 2 00:11:07.312 } 00:11:07.312 ], 00:11:07.312 "driver_specific": {} 00:11:07.312 }' 00:11:07.312 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:07.570 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:07.570 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:07.570 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:07.570 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:07.570 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:07.570 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:07.570 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:07.570 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:07.570 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:07.570 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:07.829 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:07.829 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:07.829 
[2024-07-25 10:26:11.523828] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.087 10:26:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:08.087 "name": "Existed_Raid", 00:11:08.087 "uuid": "384de1c7-08b3-40ef-8ef8-5edb6f5d18f8", 00:11:08.087 "strip_size_kb": 0, 00:11:08.087 "state": "online", 00:11:08.087 "raid_level": "raid1", 00:11:08.087 "superblock": false, 00:11:08.087 "num_base_bdevs": 2, 00:11:08.087 "num_base_bdevs_discovered": 1, 00:11:08.087 "num_base_bdevs_operational": 1, 00:11:08.087 "base_bdevs_list": [ 00:11:08.087 { 00:11:08.087 "name": null, 00:11:08.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.087 "is_configured": false, 00:11:08.087 "data_offset": 0, 00:11:08.087 "data_size": 65536 00:11:08.087 }, 00:11:08.087 { 00:11:08.087 "name": "BaseBdev2", 00:11:08.087 "uuid": "138303ba-350f-48bd-9fd4-b022685f1f44", 00:11:08.087 "is_configured": true, 00:11:08.087 "data_offset": 0, 00:11:08.087 "data_size": 65536 00:11:08.087 } 00:11:08.087 ] 00:11:08.087 }' 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:08.087 10:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.653 10:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:08.653 10:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:08.653 10:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.653 10:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:08.911 10:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:08.911 10:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.911 10:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:09.169 [2024-07-25 10:26:12.825930] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:09.169 [2024-07-25 10:26:12.826037] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.169 [2024-07-25 10:26:12.840069] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.169 [2024-07-25 10:26:12.840143] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.169 [2024-07-25 10:26:12.840158] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1838b40 name Existed_Raid, state offline 00:11:09.169 10:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:09.169 10:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:09.169 10:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.169 10:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:09.427 10:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:09.427 10:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:09.427 10:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:09.427 10:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2344215 00:11:09.427 10:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2344215 ']' 00:11:09.427 10:26:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2344215 00:11:09.427 10:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:09.427 10:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.428 10:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2344215 00:11:09.686 10:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.686 10:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.686 10:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2344215' 00:11:09.686 killing process with pid 2344215 00:11:09.686 10:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2344215 00:11:09.686 [2024-07-25 10:26:13.156847] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.686 10:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2344215 00:11:09.686 [2024-07-25 10:26:13.157944] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:09.943 00:11:09.943 real 0m10.417s 00:11:09.943 user 0m18.796s 00:11:09.943 sys 0m1.483s 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.943 ************************************ 00:11:09.943 END TEST raid_state_function_test 00:11:09.943 ************************************ 00:11:09.943 10:26:13 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:11:09.943 10:26:13 bdev_raid 
-- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:09.943 10:26:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.943 10:26:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.943 ************************************ 00:11:09.943 START TEST raid_state_function_test_sb 00:11:09.943 ************************************ 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2345665 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2345665' 00:11:09.943 Process raid pid: 2345665 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2345665 /var/tmp/spdk-raid.sock 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2345665 ']' 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:09.943 10:26:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:09.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:09.943 10:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.943 [2024-07-25 10:26:13.560032] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:11:09.943 [2024-07-25 10:26:13.560122] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.943 [2024-07-25 10:26:13.643971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.201 [2024-07-25 10:26:13.769483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.201 [2024-07-25 10:26:13.849998] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.201 [2024-07-25 10:26:13.850036] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:11.168 [2024-07-25 10:26:14.716330] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.168 [2024-07-25 10:26:14.716384] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.168 [2024-07-25 10:26:14.716397] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.168 [2024-07-25 10:26:14.716411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.168 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.426 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:11.426 "name": 
"Existed_Raid", 00:11:11.426 "uuid": "3cfb90cf-9569-4e54-a5bc-eee558e78159", 00:11:11.426 "strip_size_kb": 0, 00:11:11.426 "state": "configuring", 00:11:11.426 "raid_level": "raid1", 00:11:11.426 "superblock": true, 00:11:11.426 "num_base_bdevs": 2, 00:11:11.426 "num_base_bdevs_discovered": 0, 00:11:11.426 "num_base_bdevs_operational": 2, 00:11:11.426 "base_bdevs_list": [ 00:11:11.426 { 00:11:11.426 "name": "BaseBdev1", 00:11:11.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.426 "is_configured": false, 00:11:11.426 "data_offset": 0, 00:11:11.426 "data_size": 0 00:11:11.426 }, 00:11:11.426 { 00:11:11.426 "name": "BaseBdev2", 00:11:11.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.426 "is_configured": false, 00:11:11.426 "data_offset": 0, 00:11:11.426 "data_size": 0 00:11:11.426 } 00:11:11.426 ] 00:11:11.426 }' 00:11:11.426 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:11.426 10:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.992 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:12.250 [2024-07-25 10:26:15.766971] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.250 [2024-07-25 10:26:15.767012] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x148e600 name Existed_Raid, state configuring 00:11:12.250 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:12.508 [2024-07-25 10:26:16.063781] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:12.508 [2024-07-25 10:26:16.063826] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:12.508 [2024-07-25 10:26:16.063839] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.508 [2024-07-25 10:26:16.063852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.508 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:12.776 [2024-07-25 10:26:16.312766] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.776 BaseBdev1 00:11:12.776 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:12.776 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:12.776 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:12.776 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:12.776 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:12.776 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:12.776 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:13.034 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.292 [ 00:11:13.292 { 00:11:13.292 "name": "BaseBdev1", 00:11:13.292 "aliases": [ 00:11:13.292 "98585cd3-afa3-4432-817d-52e03b9227ef" 00:11:13.292 ], 00:11:13.292 
"product_name": "Malloc disk", 00:11:13.292 "block_size": 512, 00:11:13.292 "num_blocks": 65536, 00:11:13.292 "uuid": "98585cd3-afa3-4432-817d-52e03b9227ef", 00:11:13.292 "assigned_rate_limits": { 00:11:13.292 "rw_ios_per_sec": 0, 00:11:13.292 "rw_mbytes_per_sec": 0, 00:11:13.292 "r_mbytes_per_sec": 0, 00:11:13.292 "w_mbytes_per_sec": 0 00:11:13.292 }, 00:11:13.292 "claimed": true, 00:11:13.292 "claim_type": "exclusive_write", 00:11:13.292 "zoned": false, 00:11:13.292 "supported_io_types": { 00:11:13.292 "read": true, 00:11:13.292 "write": true, 00:11:13.292 "unmap": true, 00:11:13.292 "flush": true, 00:11:13.292 "reset": true, 00:11:13.292 "nvme_admin": false, 00:11:13.292 "nvme_io": false, 00:11:13.292 "nvme_io_md": false, 00:11:13.292 "write_zeroes": true, 00:11:13.292 "zcopy": true, 00:11:13.292 "get_zone_info": false, 00:11:13.292 "zone_management": false, 00:11:13.292 "zone_append": false, 00:11:13.292 "compare": false, 00:11:13.292 "compare_and_write": false, 00:11:13.292 "abort": true, 00:11:13.292 "seek_hole": false, 00:11:13.292 "seek_data": false, 00:11:13.292 "copy": true, 00:11:13.292 "nvme_iov_md": false 00:11:13.292 }, 00:11:13.292 "memory_domains": [ 00:11:13.292 { 00:11:13.292 "dma_device_id": "system", 00:11:13.292 "dma_device_type": 1 00:11:13.292 }, 00:11:13.292 { 00:11:13.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.292 "dma_device_type": 2 00:11:13.292 } 00:11:13.292 ], 00:11:13.292 "driver_specific": {} 00:11:13.292 } 00:11:13.292 ] 00:11:13.292 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:13.292 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:13.292 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:13.292 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
00:11:13.292 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:13.292 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:13.292 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:13.292 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:13.292 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:13.292 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:13.292 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:13.292 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.292 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.552 10:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:13.552 "name": "Existed_Raid", 00:11:13.552 "uuid": "554886cd-0aad-45db-93b1-fd070cf9bb8f", 00:11:13.552 "strip_size_kb": 0, 00:11:13.552 "state": "configuring", 00:11:13.552 "raid_level": "raid1", 00:11:13.552 "superblock": true, 00:11:13.552 "num_base_bdevs": 2, 00:11:13.552 "num_base_bdevs_discovered": 1, 00:11:13.552 "num_base_bdevs_operational": 2, 00:11:13.552 "base_bdevs_list": [ 00:11:13.552 { 00:11:13.552 "name": "BaseBdev1", 00:11:13.552 "uuid": "98585cd3-afa3-4432-817d-52e03b9227ef", 00:11:13.552 "is_configured": true, 00:11:13.552 "data_offset": 2048, 00:11:13.552 "data_size": 63488 00:11:13.552 }, 00:11:13.552 { 00:11:13.552 "name": "BaseBdev2", 00:11:13.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.552 
"is_configured": false, 00:11:13.552 "data_offset": 0, 00:11:13.552 "data_size": 0 00:11:13.552 } 00:11:13.552 ] 00:11:13.552 }' 00:11:13.552 10:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:13.552 10:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.118 10:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:14.376 [2024-07-25 10:26:17.884902] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.376 [2024-07-25 10:26:17.884959] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x148de50 name Existed_Raid, state configuring 00:11:14.376 10:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:14.634 [2024-07-25 10:26:18.125594] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.634 [2024-07-25 10:26:18.127163] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.634 [2024-07-25 10:26:18.127201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.634 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:14.634 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:14.634 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:14.634 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:14.634 10:26:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:14.634 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:14.634 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:14.634 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:14.634 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:14.634 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:14.634 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:14.634 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:14.634 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:14.634 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.892 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:14.892 "name": "Existed_Raid", 00:11:14.892 "uuid": "37f00a20-cf95-4808-97c5-68c18bbd13e1", 00:11:14.892 "strip_size_kb": 0, 00:11:14.892 "state": "configuring", 00:11:14.892 "raid_level": "raid1", 00:11:14.892 "superblock": true, 00:11:14.892 "num_base_bdevs": 2, 00:11:14.892 "num_base_bdevs_discovered": 1, 00:11:14.892 "num_base_bdevs_operational": 2, 00:11:14.892 "base_bdevs_list": [ 00:11:14.892 { 00:11:14.892 "name": "BaseBdev1", 00:11:14.892 "uuid": "98585cd3-afa3-4432-817d-52e03b9227ef", 00:11:14.892 "is_configured": true, 00:11:14.892 "data_offset": 2048, 00:11:14.892 "data_size": 63488 00:11:14.892 }, 00:11:14.892 { 00:11:14.892 "name": 
"BaseBdev2", 00:11:14.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.892 "is_configured": false, 00:11:14.892 "data_offset": 0, 00:11:14.892 "data_size": 0 00:11:14.892 } 00:11:14.892 ] 00:11:14.892 }' 00:11:14.892 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:14.892 10:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.458 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.715 [2024-07-25 10:26:19.214171] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.715 [2024-07-25 10:26:19.214372] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x148eb40 00:11:15.715 [2024-07-25 10:26:19.214388] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:15.715 [2024-07-25 10:26:19.214532] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x148f8e0 00:11:15.715 [2024-07-25 10:26:19.214661] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x148eb40 00:11:15.715 [2024-07-25 10:26:19.214675] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x148eb40 00:11:15.715 [2024-07-25 10:26:19.214766] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.715 BaseBdev2 00:11:15.715 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:15.715 10:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:15.715 10:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:15.715 10:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- 
# local i 00:11:15.715 10:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:15.716 10:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:15.716 10:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:15.974 10:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:16.232 [ 00:11:16.232 { 00:11:16.232 "name": "BaseBdev2", 00:11:16.232 "aliases": [ 00:11:16.232 "98246a88-7d90-49e1-8aeb-5d442361f11f" 00:11:16.232 ], 00:11:16.232 "product_name": "Malloc disk", 00:11:16.232 "block_size": 512, 00:11:16.232 "num_blocks": 65536, 00:11:16.232 "uuid": "98246a88-7d90-49e1-8aeb-5d442361f11f", 00:11:16.232 "assigned_rate_limits": { 00:11:16.232 "rw_ios_per_sec": 0, 00:11:16.232 "rw_mbytes_per_sec": 0, 00:11:16.232 "r_mbytes_per_sec": 0, 00:11:16.232 "w_mbytes_per_sec": 0 00:11:16.232 }, 00:11:16.232 "claimed": true, 00:11:16.232 "claim_type": "exclusive_write", 00:11:16.232 "zoned": false, 00:11:16.232 "supported_io_types": { 00:11:16.232 "read": true, 00:11:16.232 "write": true, 00:11:16.232 "unmap": true, 00:11:16.232 "flush": true, 00:11:16.232 "reset": true, 00:11:16.232 "nvme_admin": false, 00:11:16.232 "nvme_io": false, 00:11:16.232 "nvme_io_md": false, 00:11:16.232 "write_zeroes": true, 00:11:16.232 "zcopy": true, 00:11:16.232 "get_zone_info": false, 00:11:16.232 "zone_management": false, 00:11:16.232 "zone_append": false, 00:11:16.232 "compare": false, 00:11:16.232 "compare_and_write": false, 00:11:16.232 "abort": true, 00:11:16.232 "seek_hole": false, 00:11:16.232 "seek_data": false, 00:11:16.232 "copy": true, 00:11:16.232 "nvme_iov_md": false 00:11:16.232 }, 
00:11:16.232 "memory_domains": [ 00:11:16.232 { 00:11:16.232 "dma_device_id": "system", 00:11:16.232 "dma_device_type": 1 00:11:16.232 }, 00:11:16.232 { 00:11:16.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.232 "dma_device_type": 2 00:11:16.232 } 00:11:16.232 ], 00:11:16.232 "driver_specific": {} 00:11:16.232 } 00:11:16.232 ] 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.232 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.490 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:16.490 "name": "Existed_Raid", 00:11:16.490 "uuid": "37f00a20-cf95-4808-97c5-68c18bbd13e1", 00:11:16.490 "strip_size_kb": 0, 00:11:16.490 "state": "online", 00:11:16.490 "raid_level": "raid1", 00:11:16.490 "superblock": true, 00:11:16.490 "num_base_bdevs": 2, 00:11:16.490 "num_base_bdevs_discovered": 2, 00:11:16.490 "num_base_bdevs_operational": 2, 00:11:16.490 "base_bdevs_list": [ 00:11:16.490 { 00:11:16.490 "name": "BaseBdev1", 00:11:16.490 "uuid": "98585cd3-afa3-4432-817d-52e03b9227ef", 00:11:16.490 "is_configured": true, 00:11:16.490 "data_offset": 2048, 00:11:16.490 "data_size": 63488 00:11:16.490 }, 00:11:16.490 { 00:11:16.490 "name": "BaseBdev2", 00:11:16.490 "uuid": "98246a88-7d90-49e1-8aeb-5d442361f11f", 00:11:16.490 "is_configured": true, 00:11:16.490 "data_offset": 2048, 00:11:16.490 "data_size": 63488 00:11:16.490 } 00:11:16.490 ] 00:11:16.490 }' 00:11:16.490 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:16.490 10:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.056 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:17.056 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:17.056 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:17.056 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:17.056 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:17.056 10:26:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:17.056 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:17.056 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:17.056 [2024-07-25 10:26:20.750546] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.315 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:17.315 "name": "Existed_Raid", 00:11:17.315 "aliases": [ 00:11:17.315 "37f00a20-cf95-4808-97c5-68c18bbd13e1" 00:11:17.315 ], 00:11:17.315 "product_name": "Raid Volume", 00:11:17.315 "block_size": 512, 00:11:17.315 "num_blocks": 63488, 00:11:17.315 "uuid": "37f00a20-cf95-4808-97c5-68c18bbd13e1", 00:11:17.315 "assigned_rate_limits": { 00:11:17.315 "rw_ios_per_sec": 0, 00:11:17.315 "rw_mbytes_per_sec": 0, 00:11:17.315 "r_mbytes_per_sec": 0, 00:11:17.315 "w_mbytes_per_sec": 0 00:11:17.315 }, 00:11:17.315 "claimed": false, 00:11:17.315 "zoned": false, 00:11:17.315 "supported_io_types": { 00:11:17.315 "read": true, 00:11:17.315 "write": true, 00:11:17.315 "unmap": false, 00:11:17.315 "flush": false, 00:11:17.315 "reset": true, 00:11:17.315 "nvme_admin": false, 00:11:17.315 "nvme_io": false, 00:11:17.315 "nvme_io_md": false, 00:11:17.315 "write_zeroes": true, 00:11:17.315 "zcopy": false, 00:11:17.316 "get_zone_info": false, 00:11:17.316 "zone_management": false, 00:11:17.316 "zone_append": false, 00:11:17.316 "compare": false, 00:11:17.316 "compare_and_write": false, 00:11:17.316 "abort": false, 00:11:17.316 "seek_hole": false, 00:11:17.316 "seek_data": false, 00:11:17.316 "copy": false, 00:11:17.316 "nvme_iov_md": false 00:11:17.316 }, 00:11:17.316 "memory_domains": [ 00:11:17.316 { 00:11:17.316 "dma_device_id": "system", 00:11:17.316 "dma_device_type": 1 
00:11:17.316 }, 00:11:17.316 { 00:11:17.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.316 "dma_device_type": 2 00:11:17.316 }, 00:11:17.316 { 00:11:17.316 "dma_device_id": "system", 00:11:17.316 "dma_device_type": 1 00:11:17.316 }, 00:11:17.316 { 00:11:17.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.316 "dma_device_type": 2 00:11:17.316 } 00:11:17.316 ], 00:11:17.316 "driver_specific": { 00:11:17.316 "raid": { 00:11:17.316 "uuid": "37f00a20-cf95-4808-97c5-68c18bbd13e1", 00:11:17.316 "strip_size_kb": 0, 00:11:17.316 "state": "online", 00:11:17.316 "raid_level": "raid1", 00:11:17.316 "superblock": true, 00:11:17.316 "num_base_bdevs": 2, 00:11:17.316 "num_base_bdevs_discovered": 2, 00:11:17.316 "num_base_bdevs_operational": 2, 00:11:17.316 "base_bdevs_list": [ 00:11:17.316 { 00:11:17.316 "name": "BaseBdev1", 00:11:17.316 "uuid": "98585cd3-afa3-4432-817d-52e03b9227ef", 00:11:17.316 "is_configured": true, 00:11:17.316 "data_offset": 2048, 00:11:17.316 "data_size": 63488 00:11:17.316 }, 00:11:17.316 { 00:11:17.316 "name": "BaseBdev2", 00:11:17.316 "uuid": "98246a88-7d90-49e1-8aeb-5d442361f11f", 00:11:17.316 "is_configured": true, 00:11:17.316 "data_offset": 2048, 00:11:17.316 "data_size": 63488 00:11:17.316 } 00:11:17.316 ] 00:11:17.316 } 00:11:17.316 } 00:11:17.316 }' 00:11:17.316 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.316 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:17.316 BaseBdev2' 00:11:17.316 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:17.316 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:17.316 10:26:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:17.574 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:17.574 "name": "BaseBdev1", 00:11:17.574 "aliases": [ 00:11:17.574 "98585cd3-afa3-4432-817d-52e03b9227ef" 00:11:17.574 ], 00:11:17.574 "product_name": "Malloc disk", 00:11:17.574 "block_size": 512, 00:11:17.574 "num_blocks": 65536, 00:11:17.574 "uuid": "98585cd3-afa3-4432-817d-52e03b9227ef", 00:11:17.574 "assigned_rate_limits": { 00:11:17.574 "rw_ios_per_sec": 0, 00:11:17.574 "rw_mbytes_per_sec": 0, 00:11:17.574 "r_mbytes_per_sec": 0, 00:11:17.574 "w_mbytes_per_sec": 0 00:11:17.574 }, 00:11:17.574 "claimed": true, 00:11:17.574 "claim_type": "exclusive_write", 00:11:17.574 "zoned": false, 00:11:17.574 "supported_io_types": { 00:11:17.574 "read": true, 00:11:17.574 "write": true, 00:11:17.574 "unmap": true, 00:11:17.574 "flush": true, 00:11:17.574 "reset": true, 00:11:17.574 "nvme_admin": false, 00:11:17.574 "nvme_io": false, 00:11:17.574 "nvme_io_md": false, 00:11:17.574 "write_zeroes": true, 00:11:17.574 "zcopy": true, 00:11:17.574 "get_zone_info": false, 00:11:17.574 "zone_management": false, 00:11:17.574 "zone_append": false, 00:11:17.574 "compare": false, 00:11:17.574 "compare_and_write": false, 00:11:17.574 "abort": true, 00:11:17.574 "seek_hole": false, 00:11:17.574 "seek_data": false, 00:11:17.574 "copy": true, 00:11:17.574 "nvme_iov_md": false 00:11:17.574 }, 00:11:17.574 "memory_domains": [ 00:11:17.574 { 00:11:17.574 "dma_device_id": "system", 00:11:17.574 "dma_device_type": 1 00:11:17.574 }, 00:11:17.574 { 00:11:17.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.574 "dma_device_type": 2 00:11:17.574 } 00:11:17.574 ], 00:11:17.574 "driver_specific": {} 00:11:17.574 }' 00:11:17.574 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:17.574 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:11:17.574 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:17.574 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:17.574 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:17.574 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:17.574 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:17.832 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:17.832 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:17.832 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:17.832 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:17.832 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:17.832 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:17.832 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:17.832 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:18.090 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:18.090 "name": "BaseBdev2", 00:11:18.090 "aliases": [ 00:11:18.090 "98246a88-7d90-49e1-8aeb-5d442361f11f" 00:11:18.090 ], 00:11:18.090 "product_name": "Malloc disk", 00:11:18.090 "block_size": 512, 00:11:18.090 "num_blocks": 65536, 00:11:18.090 "uuid": "98246a88-7d90-49e1-8aeb-5d442361f11f", 00:11:18.090 "assigned_rate_limits": { 00:11:18.090 "rw_ios_per_sec": 0, 00:11:18.090 
"rw_mbytes_per_sec": 0, 00:11:18.090 "r_mbytes_per_sec": 0, 00:11:18.090 "w_mbytes_per_sec": 0 00:11:18.090 }, 00:11:18.090 "claimed": true, 00:11:18.090 "claim_type": "exclusive_write", 00:11:18.090 "zoned": false, 00:11:18.090 "supported_io_types": { 00:11:18.090 "read": true, 00:11:18.090 "write": true, 00:11:18.090 "unmap": true, 00:11:18.090 "flush": true, 00:11:18.090 "reset": true, 00:11:18.090 "nvme_admin": false, 00:11:18.090 "nvme_io": false, 00:11:18.090 "nvme_io_md": false, 00:11:18.090 "write_zeroes": true, 00:11:18.090 "zcopy": true, 00:11:18.090 "get_zone_info": false, 00:11:18.090 "zone_management": false, 00:11:18.090 "zone_append": false, 00:11:18.090 "compare": false, 00:11:18.090 "compare_and_write": false, 00:11:18.090 "abort": true, 00:11:18.090 "seek_hole": false, 00:11:18.090 "seek_data": false, 00:11:18.090 "copy": true, 00:11:18.090 "nvme_iov_md": false 00:11:18.090 }, 00:11:18.090 "memory_domains": [ 00:11:18.090 { 00:11:18.090 "dma_device_id": "system", 00:11:18.090 "dma_device_type": 1 00:11:18.090 }, 00:11:18.090 { 00:11:18.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.091 "dma_device_type": 2 00:11:18.091 } 00:11:18.091 ], 00:11:18.091 "driver_specific": {} 00:11:18.091 }' 00:11:18.091 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:18.091 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:18.091 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:18.091 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:18.091 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:18.091 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:18.091 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:18.349 10:26:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:18.349 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:18.349 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:18.349 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:18.349 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:18.349 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:18.607 [2024-07-25 10:26:22.146028] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:18.607 10:26:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.607 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.866 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:18.866 "name": "Existed_Raid", 00:11:18.866 "uuid": "37f00a20-cf95-4808-97c5-68c18bbd13e1", 00:11:18.866 "strip_size_kb": 0, 00:11:18.866 "state": "online", 00:11:18.866 "raid_level": "raid1", 00:11:18.866 "superblock": true, 00:11:18.866 "num_base_bdevs": 2, 00:11:18.866 "num_base_bdevs_discovered": 1, 00:11:18.866 "num_base_bdevs_operational": 1, 00:11:18.866 "base_bdevs_list": [ 00:11:18.866 { 00:11:18.866 "name": null, 00:11:18.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.866 "is_configured": false, 00:11:18.866 "data_offset": 2048, 00:11:18.866 "data_size": 63488 00:11:18.866 }, 00:11:18.866 { 00:11:18.866 "name": "BaseBdev2", 00:11:18.866 "uuid": "98246a88-7d90-49e1-8aeb-5d442361f11f", 00:11:18.866 "is_configured": true, 00:11:18.866 "data_offset": 2048, 00:11:18.866 "data_size": 63488 00:11:18.866 } 00:11:18.866 ] 00:11:18.866 }' 00:11:18.866 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:18.866 10:26:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.432 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:19.432 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:19.432 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.432 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:19.690 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:19.690 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.690 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:19.947 [2024-07-25 10:26:23.488324] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.947 [2024-07-25 10:26:23.488437] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.947 [2024-07-25 10:26:23.502472] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.947 [2024-07-25 10:26:23.502535] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.947 [2024-07-25 10:26:23.502548] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x148eb40 name Existed_Raid, state offline 00:11:19.947 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:19.947 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:19.947 10:26:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.947 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:20.209 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:20.209 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:20.209 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:20.209 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2345665 00:11:20.209 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2345665 ']' 00:11:20.209 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2345665 00:11:20.209 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:20.209 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.209 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2345665 00:11:20.209 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:20.209 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:20.209 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2345665' 00:11:20.209 killing process with pid 2345665 00:11:20.209 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2345665 00:11:20.209 [2024-07-25 10:26:23.792754] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:20.209 10:26:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2345665 00:11:20.209 [2024-07-25 10:26:23.793980] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.467 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:11:20.467 00:11:20.467 real 0m10.578s 00:11:20.467 user 0m19.098s 00:11:20.467 sys 0m1.504s 00:11:20.467 10:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.467 10:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.467 ************************************ 00:11:20.467 END TEST raid_state_function_test_sb 00:11:20.467 ************************************ 00:11:20.467 10:26:24 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:20.467 10:26:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:20.467 10:26:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:20.467 10:26:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.467 ************************************ 00:11:20.467 START TEST raid_superblock_test 00:11:20.467 ************************************ 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local 
base_bdevs_pt 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2347114 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2347114 /var/tmp/spdk-raid.sock 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2347114 ']' 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:11:20.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:20.467 10:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.726 [2024-07-25 10:26:24.186645] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:11:20.726 [2024-07-25 10:26:24.186730] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347114 ] 00:11:20.726 [2024-07-25 10:26:24.263481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.726 [2024-07-25 10:26:24.377936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.984 [2024-07-25 10:26:24.446272] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.984 [2024-07-25 10:26:24.446308] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.550 10:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.550 10:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:21.550 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:21.550 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:21.550 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:21.550 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:21.550 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:21.550 10:26:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:21.550 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:21.550 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:21.550 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:21.808 malloc1 00:11:21.808 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:22.067 [2024-07-25 10:26:25.705523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:22.067 [2024-07-25 10:26:25.705593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.067 [2024-07-25 10:26:25.705624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11ac2b0 00:11:22.067 [2024-07-25 10:26:25.705641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.067 [2024-07-25 10:26:25.707549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.067 [2024-07-25 10:26:25.707577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:22.067 pt1 00:11:22.067 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:22.067 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:22.067 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:22.067 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:22.067 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 
-- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:22.067 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:22.067 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:22.067 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:22.067 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:22.324 malloc2 00:11:22.324 10:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:22.582 [2024-07-25 10:26:26.197461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:22.582 [2024-07-25 10:26:26.197510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.582 [2024-07-25 10:26:26.197531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x135f1e0 00:11:22.582 [2024-07-25 10:26:26.197546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.582 [2024-07-25 10:26:26.198990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.582 [2024-07-25 10:26:26.199018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:22.582 pt2 00:11:22.582 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:22.582 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:22.582 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:11:22.841 [2024-07-25 10:26:26.490282] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:22.841 [2024-07-25 10:26:26.491788] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:22.841 [2024-07-25 10:26:26.491993] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1343df0 00:11:22.841 [2024-07-25 10:26:26.492011] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:22.841 [2024-07-25 10:26:26.492262] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13449b0 00:11:22.841 [2024-07-25 10:26:26.492458] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1343df0 00:11:22.841 [2024-07-25 10:26:26.492475] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1343df0 00:11:22.841 [2024-07-25 10:26:26.492623] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.841 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:22.841 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:22.841 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:22.841 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:22.841 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:22.841 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:22.841 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:22.841 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:22.841 10:26:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:22.841 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:22.841 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.841 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.099 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:23.099 "name": "raid_bdev1", 00:11:23.099 "uuid": "7271f51e-eaf6-410b-a548-ac41828db5a8", 00:11:23.099 "strip_size_kb": 0, 00:11:23.099 "state": "online", 00:11:23.099 "raid_level": "raid1", 00:11:23.099 "superblock": true, 00:11:23.099 "num_base_bdevs": 2, 00:11:23.099 "num_base_bdevs_discovered": 2, 00:11:23.099 "num_base_bdevs_operational": 2, 00:11:23.099 "base_bdevs_list": [ 00:11:23.099 { 00:11:23.099 "name": "pt1", 00:11:23.099 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:23.099 "is_configured": true, 00:11:23.099 "data_offset": 2048, 00:11:23.099 "data_size": 63488 00:11:23.099 }, 00:11:23.099 { 00:11:23.099 "name": "pt2", 00:11:23.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:23.099 "is_configured": true, 00:11:23.099 "data_offset": 2048, 00:11:23.099 "data_size": 63488 00:11:23.099 } 00:11:23.099 ] 00:11:23.099 }' 00:11:23.099 10:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:23.099 10:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.034 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:24.034 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:24.034 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:24.034 10:26:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:24.034 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:24.034 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:24.034 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:24.034 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:24.034 [2024-07-25 10:26:27.605468] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.034 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:24.034 "name": "raid_bdev1", 00:11:24.034 "aliases": [ 00:11:24.034 "7271f51e-eaf6-410b-a548-ac41828db5a8" 00:11:24.034 ], 00:11:24.034 "product_name": "Raid Volume", 00:11:24.034 "block_size": 512, 00:11:24.034 "num_blocks": 63488, 00:11:24.034 "uuid": "7271f51e-eaf6-410b-a548-ac41828db5a8", 00:11:24.034 "assigned_rate_limits": { 00:11:24.034 "rw_ios_per_sec": 0, 00:11:24.034 "rw_mbytes_per_sec": 0, 00:11:24.034 "r_mbytes_per_sec": 0, 00:11:24.034 "w_mbytes_per_sec": 0 00:11:24.034 }, 00:11:24.034 "claimed": false, 00:11:24.034 "zoned": false, 00:11:24.034 "supported_io_types": { 00:11:24.034 "read": true, 00:11:24.034 "write": true, 00:11:24.034 "unmap": false, 00:11:24.034 "flush": false, 00:11:24.034 "reset": true, 00:11:24.034 "nvme_admin": false, 00:11:24.034 "nvme_io": false, 00:11:24.034 "nvme_io_md": false, 00:11:24.034 "write_zeroes": true, 00:11:24.034 "zcopy": false, 00:11:24.034 "get_zone_info": false, 00:11:24.034 "zone_management": false, 00:11:24.034 "zone_append": false, 00:11:24.034 "compare": false, 00:11:24.034 "compare_and_write": false, 00:11:24.034 "abort": false, 00:11:24.034 "seek_hole": false, 00:11:24.034 "seek_data": false, 00:11:24.034 "copy": 
false, 00:11:24.034 "nvme_iov_md": false 00:11:24.034 }, 00:11:24.034 "memory_domains": [ 00:11:24.034 { 00:11:24.034 "dma_device_id": "system", 00:11:24.034 "dma_device_type": 1 00:11:24.034 }, 00:11:24.034 { 00:11:24.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.034 "dma_device_type": 2 00:11:24.034 }, 00:11:24.034 { 00:11:24.034 "dma_device_id": "system", 00:11:24.034 "dma_device_type": 1 00:11:24.034 }, 00:11:24.034 { 00:11:24.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.034 "dma_device_type": 2 00:11:24.034 } 00:11:24.034 ], 00:11:24.034 "driver_specific": { 00:11:24.034 "raid": { 00:11:24.034 "uuid": "7271f51e-eaf6-410b-a548-ac41828db5a8", 00:11:24.034 "strip_size_kb": 0, 00:11:24.034 "state": "online", 00:11:24.034 "raid_level": "raid1", 00:11:24.034 "superblock": true, 00:11:24.034 "num_base_bdevs": 2, 00:11:24.034 "num_base_bdevs_discovered": 2, 00:11:24.034 "num_base_bdevs_operational": 2, 00:11:24.034 "base_bdevs_list": [ 00:11:24.034 { 00:11:24.034 "name": "pt1", 00:11:24.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.034 "is_configured": true, 00:11:24.034 "data_offset": 2048, 00:11:24.034 "data_size": 63488 00:11:24.034 }, 00:11:24.034 { 00:11:24.034 "name": "pt2", 00:11:24.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.034 "is_configured": true, 00:11:24.034 "data_offset": 2048, 00:11:24.034 "data_size": 63488 00:11:24.034 } 00:11:24.034 ] 00:11:24.034 } 00:11:24.034 } 00:11:24.034 }' 00:11:24.034 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.034 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:24.034 pt2' 00:11:24.034 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:24.034 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:24.034 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:24.301 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:24.301 "name": "pt1", 00:11:24.301 "aliases": [ 00:11:24.301 "00000000-0000-0000-0000-000000000001" 00:11:24.301 ], 00:11:24.301 "product_name": "passthru", 00:11:24.301 "block_size": 512, 00:11:24.301 "num_blocks": 65536, 00:11:24.301 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.301 "assigned_rate_limits": { 00:11:24.301 "rw_ios_per_sec": 0, 00:11:24.301 "rw_mbytes_per_sec": 0, 00:11:24.301 "r_mbytes_per_sec": 0, 00:11:24.301 "w_mbytes_per_sec": 0 00:11:24.301 }, 00:11:24.301 "claimed": true, 00:11:24.301 "claim_type": "exclusive_write", 00:11:24.301 "zoned": false, 00:11:24.301 "supported_io_types": { 00:11:24.301 "read": true, 00:11:24.301 "write": true, 00:11:24.301 "unmap": true, 00:11:24.301 "flush": true, 00:11:24.301 "reset": true, 00:11:24.301 "nvme_admin": false, 00:11:24.301 "nvme_io": false, 00:11:24.301 "nvme_io_md": false, 00:11:24.301 "write_zeroes": true, 00:11:24.301 "zcopy": true, 00:11:24.301 "get_zone_info": false, 00:11:24.301 "zone_management": false, 00:11:24.301 "zone_append": false, 00:11:24.301 "compare": false, 00:11:24.301 "compare_and_write": false, 00:11:24.301 "abort": true, 00:11:24.301 "seek_hole": false, 00:11:24.301 "seek_data": false, 00:11:24.301 "copy": true, 00:11:24.301 "nvme_iov_md": false 00:11:24.301 }, 00:11:24.301 "memory_domains": [ 00:11:24.301 { 00:11:24.301 "dma_device_id": "system", 00:11:24.301 "dma_device_type": 1 00:11:24.301 }, 00:11:24.301 { 00:11:24.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.301 "dma_device_type": 2 00:11:24.301 } 00:11:24.301 ], 00:11:24.301 "driver_specific": { 00:11:24.301 "passthru": { 00:11:24.301 "name": "pt1", 00:11:24.301 "base_bdev_name": "malloc1" 
00:11:24.301 } 00:11:24.301 } 00:11:24.301 }' 00:11:24.301 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:24.301 10:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:24.301 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:24.301 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:24.566 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:24.566 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:24.566 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:24.566 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:24.566 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:24.566 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:24.566 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:24.566 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:24.566 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:24.566 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:24.566 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:24.825 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:24.825 "name": "pt2", 00:11:24.825 "aliases": [ 00:11:24.825 "00000000-0000-0000-0000-000000000002" 00:11:24.825 ], 00:11:24.825 "product_name": "passthru", 00:11:24.825 "block_size": 512, 00:11:24.825 "num_blocks": 65536, 00:11:24.825 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:11:24.825 "assigned_rate_limits": { 00:11:24.825 "rw_ios_per_sec": 0, 00:11:24.825 "rw_mbytes_per_sec": 0, 00:11:24.825 "r_mbytes_per_sec": 0, 00:11:24.825 "w_mbytes_per_sec": 0 00:11:24.825 }, 00:11:24.825 "claimed": true, 00:11:24.825 "claim_type": "exclusive_write", 00:11:24.825 "zoned": false, 00:11:24.825 "supported_io_types": { 00:11:24.825 "read": true, 00:11:24.825 "write": true, 00:11:24.825 "unmap": true, 00:11:24.825 "flush": true, 00:11:24.825 "reset": true, 00:11:24.825 "nvme_admin": false, 00:11:24.825 "nvme_io": false, 00:11:24.825 "nvme_io_md": false, 00:11:24.825 "write_zeroes": true, 00:11:24.825 "zcopy": true, 00:11:24.825 "get_zone_info": false, 00:11:24.825 "zone_management": false, 00:11:24.825 "zone_append": false, 00:11:24.825 "compare": false, 00:11:24.825 "compare_and_write": false, 00:11:24.825 "abort": true, 00:11:24.825 "seek_hole": false, 00:11:24.825 "seek_data": false, 00:11:24.825 "copy": true, 00:11:24.825 "nvme_iov_md": false 00:11:24.825 }, 00:11:24.825 "memory_domains": [ 00:11:24.825 { 00:11:24.825 "dma_device_id": "system", 00:11:24.825 "dma_device_type": 1 00:11:24.825 }, 00:11:24.825 { 00:11:24.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.825 "dma_device_type": 2 00:11:24.825 } 00:11:24.825 ], 00:11:24.825 "driver_specific": { 00:11:24.825 "passthru": { 00:11:24.825 "name": "pt2", 00:11:24.825 "base_bdev_name": "malloc2" 00:11:24.825 } 00:11:24.825 } 00:11:24.825 }' 00:11:24.825 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:24.825 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:25.083 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:25.083 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:25.083 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:25.083 10:26:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:25.083 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:25.083 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:25.083 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:25.083 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:25.083 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:25.083 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:25.083 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:25.083 10:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:25.341 [2024-07-25 10:26:29.001178] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.341 10:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7271f51e-eaf6-410b-a548-ac41828db5a8 00:11:25.341 10:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 7271f51e-eaf6-410b-a548-ac41828db5a8 ']' 00:11:25.341 10:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:25.599 [2024-07-25 10:26:29.245538] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.600 [2024-07-25 10:26:29.245558] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.600 [2024-07-25 10:26:29.245627] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.600 [2024-07-25 10:26:29.245698] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.600 [2024-07-25 10:26:29.245712] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1343df0 name raid_bdev1, state offline 00:11:25.600 10:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.600 10:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:11:25.857 10:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:11:25.857 10:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:11:25.857 10:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.857 10:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:26.116 10:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.116 10:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 
'malloc1 malloc2' -n raid_bdev1 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:26.682 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:26.940 [2024-07-25 10:26:30.573058] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:26.940 [2024-07-25 10:26:30.574389] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:26.940 [2024-07-25 10:26:30.574451] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:26.940 [2024-07-25 10:26:30.574516] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:26.940 [2024-07-25 10:26:30.574539] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.940 [2024-07-25 10:26:30.574549] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11adb80 name raid_bdev1, state configuring 00:11:26.940 request: 00:11:26.940 { 00:11:26.940 "name": "raid_bdev1", 00:11:26.940 "raid_level": "raid1", 00:11:26.940 "base_bdevs": [ 00:11:26.940 "malloc1", 00:11:26.940 "malloc2" 00:11:26.940 ], 00:11:26.940 "superblock": false, 00:11:26.940 "method": "bdev_raid_create", 00:11:26.940 "req_id": 1 00:11:26.940 } 00:11:26.940 Got JSON-RPC error response 00:11:26.940 response: 00:11:26.940 { 00:11:26.940 "code": -17, 00:11:26.940 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:26.940 } 00:11:26.940 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:26.940 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:26.940 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:26.940 10:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:26.940 10:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.940 10:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:11:27.198 10:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 
00:11:27.198 10:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:11:27.198 10:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:27.456 [2024-07-25 10:26:31.070324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:27.456 [2024-07-25 10:26:31.070380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.456 [2024-07-25 10:26:31.070429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13447b0 00:11:27.456 [2024-07-25 10:26:31.070443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.456 [2024-07-25 10:26:31.072218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.456 [2024-07-25 10:26:31.072244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:27.456 [2024-07-25 10:26:31.072325] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:27.456 [2024-07-25 10:26:31.072359] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:27.456 pt1 00:11:27.456 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:27.456 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:27.456 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:27.456 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:27.456 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:27.456 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:11:27.456 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:27.456 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:27.456 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:27.456 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:27.456 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.456 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.714 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:27.714 "name": "raid_bdev1", 00:11:27.714 "uuid": "7271f51e-eaf6-410b-a548-ac41828db5a8", 00:11:27.714 "strip_size_kb": 0, 00:11:27.714 "state": "configuring", 00:11:27.714 "raid_level": "raid1", 00:11:27.714 "superblock": true, 00:11:27.714 "num_base_bdevs": 2, 00:11:27.714 "num_base_bdevs_discovered": 1, 00:11:27.714 "num_base_bdevs_operational": 2, 00:11:27.714 "base_bdevs_list": [ 00:11:27.714 { 00:11:27.714 "name": "pt1", 00:11:27.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.714 "is_configured": true, 00:11:27.714 "data_offset": 2048, 00:11:27.714 "data_size": 63488 00:11:27.714 }, 00:11:27.714 { 00:11:27.714 "name": null, 00:11:27.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.714 "is_configured": false, 00:11:27.714 "data_offset": 2048, 00:11:27.714 "data_size": 63488 00:11:27.714 } 00:11:27.714 ] 00:11:27.714 }' 00:11:27.714 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:27.714 10:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.279 10:26:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:11:28.279 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:11:28.279 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:28.279 10:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:28.537 [2024-07-25 10:26:32.113081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:28.537 [2024-07-25 10:26:32.113163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.537 [2024-07-25 10:26:32.113188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11ab8c0 00:11:28.537 [2024-07-25 10:26:32.113203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.537 [2024-07-25 10:26:32.113621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.537 [2024-07-25 10:26:32.113646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:28.537 [2024-07-25 10:26:32.113724] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:28.537 [2024-07-25 10:26:32.113752] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:28.537 [2024-07-25 10:26:32.113875] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x11ab3d0 00:11:28.537 [2024-07-25 10:26:32.113892] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:28.537 [2024-07-25 10:26:32.114062] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1343dc0 00:11:28.537 [2024-07-25 10:26:32.114247] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11ab3d0 00:11:28.537 [2024-07-25 10:26:32.114264] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11ab3d0 00:11:28.537 [2024-07-25 10:26:32.114375] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.537 pt2 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.537 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.797 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:28.797 "name": 
"raid_bdev1", 00:11:28.797 "uuid": "7271f51e-eaf6-410b-a548-ac41828db5a8", 00:11:28.797 "strip_size_kb": 0, 00:11:28.797 "state": "online", 00:11:28.797 "raid_level": "raid1", 00:11:28.797 "superblock": true, 00:11:28.797 "num_base_bdevs": 2, 00:11:28.797 "num_base_bdevs_discovered": 2, 00:11:28.797 "num_base_bdevs_operational": 2, 00:11:28.797 "base_bdevs_list": [ 00:11:28.797 { 00:11:28.797 "name": "pt1", 00:11:28.797 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:28.797 "is_configured": true, 00:11:28.797 "data_offset": 2048, 00:11:28.797 "data_size": 63488 00:11:28.797 }, 00:11:28.797 { 00:11:28.797 "name": "pt2", 00:11:28.797 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.797 "is_configured": true, 00:11:28.797 "data_offset": 2048, 00:11:28.797 "data_size": 63488 00:11:28.797 } 00:11:28.797 ] 00:11:28.797 }' 00:11:28.797 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:28.797 10:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.363 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:11:29.363 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:29.363 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:29.363 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:29.363 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:29.363 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:29.363 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:29.363 10:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:29.621 [2024-07-25 
10:26:33.148089] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.621 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:29.621 "name": "raid_bdev1", 00:11:29.621 "aliases": [ 00:11:29.621 "7271f51e-eaf6-410b-a548-ac41828db5a8" 00:11:29.621 ], 00:11:29.621 "product_name": "Raid Volume", 00:11:29.621 "block_size": 512, 00:11:29.621 "num_blocks": 63488, 00:11:29.621 "uuid": "7271f51e-eaf6-410b-a548-ac41828db5a8", 00:11:29.621 "assigned_rate_limits": { 00:11:29.621 "rw_ios_per_sec": 0, 00:11:29.621 "rw_mbytes_per_sec": 0, 00:11:29.621 "r_mbytes_per_sec": 0, 00:11:29.621 "w_mbytes_per_sec": 0 00:11:29.621 }, 00:11:29.621 "claimed": false, 00:11:29.621 "zoned": false, 00:11:29.621 "supported_io_types": { 00:11:29.621 "read": true, 00:11:29.621 "write": true, 00:11:29.621 "unmap": false, 00:11:29.621 "flush": false, 00:11:29.621 "reset": true, 00:11:29.621 "nvme_admin": false, 00:11:29.621 "nvme_io": false, 00:11:29.621 "nvme_io_md": false, 00:11:29.621 "write_zeroes": true, 00:11:29.621 "zcopy": false, 00:11:29.621 "get_zone_info": false, 00:11:29.621 "zone_management": false, 00:11:29.621 "zone_append": false, 00:11:29.621 "compare": false, 00:11:29.621 "compare_and_write": false, 00:11:29.621 "abort": false, 00:11:29.621 "seek_hole": false, 00:11:29.621 "seek_data": false, 00:11:29.621 "copy": false, 00:11:29.621 "nvme_iov_md": false 00:11:29.621 }, 00:11:29.621 "memory_domains": [ 00:11:29.621 { 00:11:29.621 "dma_device_id": "system", 00:11:29.621 "dma_device_type": 1 00:11:29.621 }, 00:11:29.621 { 00:11:29.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.621 "dma_device_type": 2 00:11:29.621 }, 00:11:29.621 { 00:11:29.621 "dma_device_id": "system", 00:11:29.621 "dma_device_type": 1 00:11:29.621 }, 00:11:29.621 { 00:11:29.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.621 "dma_device_type": 2 00:11:29.621 } 00:11:29.621 ], 00:11:29.621 "driver_specific": { 00:11:29.621 
"raid": { 00:11:29.621 "uuid": "7271f51e-eaf6-410b-a548-ac41828db5a8", 00:11:29.621 "strip_size_kb": 0, 00:11:29.621 "state": "online", 00:11:29.621 "raid_level": "raid1", 00:11:29.621 "superblock": true, 00:11:29.621 "num_base_bdevs": 2, 00:11:29.621 "num_base_bdevs_discovered": 2, 00:11:29.621 "num_base_bdevs_operational": 2, 00:11:29.621 "base_bdevs_list": [ 00:11:29.621 { 00:11:29.621 "name": "pt1", 00:11:29.621 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:29.621 "is_configured": true, 00:11:29.621 "data_offset": 2048, 00:11:29.621 "data_size": 63488 00:11:29.621 }, 00:11:29.621 { 00:11:29.621 "name": "pt2", 00:11:29.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.621 "is_configured": true, 00:11:29.621 "data_offset": 2048, 00:11:29.621 "data_size": 63488 00:11:29.621 } 00:11:29.621 ] 00:11:29.621 } 00:11:29.621 } 00:11:29.621 }' 00:11:29.621 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.621 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:29.621 pt2' 00:11:29.621 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:29.622 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:29.622 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:29.879 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:29.879 "name": "pt1", 00:11:29.879 "aliases": [ 00:11:29.879 "00000000-0000-0000-0000-000000000001" 00:11:29.879 ], 00:11:29.879 "product_name": "passthru", 00:11:29.879 "block_size": 512, 00:11:29.879 "num_blocks": 65536, 00:11:29.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:29.879 "assigned_rate_limits": { 
00:11:29.879 "rw_ios_per_sec": 0, 00:11:29.879 "rw_mbytes_per_sec": 0, 00:11:29.879 "r_mbytes_per_sec": 0, 00:11:29.879 "w_mbytes_per_sec": 0 00:11:29.879 }, 00:11:29.879 "claimed": true, 00:11:29.879 "claim_type": "exclusive_write", 00:11:29.879 "zoned": false, 00:11:29.879 "supported_io_types": { 00:11:29.879 "read": true, 00:11:29.879 "write": true, 00:11:29.879 "unmap": true, 00:11:29.879 "flush": true, 00:11:29.879 "reset": true, 00:11:29.879 "nvme_admin": false, 00:11:29.879 "nvme_io": false, 00:11:29.879 "nvme_io_md": false, 00:11:29.879 "write_zeroes": true, 00:11:29.879 "zcopy": true, 00:11:29.879 "get_zone_info": false, 00:11:29.879 "zone_management": false, 00:11:29.879 "zone_append": false, 00:11:29.879 "compare": false, 00:11:29.879 "compare_and_write": false, 00:11:29.879 "abort": true, 00:11:29.879 "seek_hole": false, 00:11:29.879 "seek_data": false, 00:11:29.879 "copy": true, 00:11:29.879 "nvme_iov_md": false 00:11:29.879 }, 00:11:29.879 "memory_domains": [ 00:11:29.879 { 00:11:29.879 "dma_device_id": "system", 00:11:29.879 "dma_device_type": 1 00:11:29.879 }, 00:11:29.879 { 00:11:29.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.879 "dma_device_type": 2 00:11:29.879 } 00:11:29.879 ], 00:11:29.879 "driver_specific": { 00:11:29.879 "passthru": { 00:11:29.879 "name": "pt1", 00:11:29.879 "base_bdev_name": "malloc1" 00:11:29.879 } 00:11:29.879 } 00:11:29.879 }' 00:11:29.879 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:29.879 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:29.879 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:29.879 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:29.879 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:30.136 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:11:30.136 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:30.136 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:30.136 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:30.136 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:30.136 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:30.136 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:30.136 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:30.136 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:30.136 10:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:30.394 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:30.394 "name": "pt2", 00:11:30.394 "aliases": [ 00:11:30.394 "00000000-0000-0000-0000-000000000002" 00:11:30.394 ], 00:11:30.394 "product_name": "passthru", 00:11:30.394 "block_size": 512, 00:11:30.394 "num_blocks": 65536, 00:11:30.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:30.394 "assigned_rate_limits": { 00:11:30.394 "rw_ios_per_sec": 0, 00:11:30.394 "rw_mbytes_per_sec": 0, 00:11:30.394 "r_mbytes_per_sec": 0, 00:11:30.394 "w_mbytes_per_sec": 0 00:11:30.394 }, 00:11:30.394 "claimed": true, 00:11:30.395 "claim_type": "exclusive_write", 00:11:30.395 "zoned": false, 00:11:30.395 "supported_io_types": { 00:11:30.395 "read": true, 00:11:30.395 "write": true, 00:11:30.395 "unmap": true, 00:11:30.395 "flush": true, 00:11:30.395 "reset": true, 00:11:30.395 "nvme_admin": false, 00:11:30.395 "nvme_io": false, 00:11:30.395 "nvme_io_md": false, 00:11:30.395 "write_zeroes": true, 
00:11:30.395 "zcopy": true, 00:11:30.395 "get_zone_info": false, 00:11:30.395 "zone_management": false, 00:11:30.395 "zone_append": false, 00:11:30.395 "compare": false, 00:11:30.395 "compare_and_write": false, 00:11:30.395 "abort": true, 00:11:30.395 "seek_hole": false, 00:11:30.395 "seek_data": false, 00:11:30.395 "copy": true, 00:11:30.395 "nvme_iov_md": false 00:11:30.395 }, 00:11:30.395 "memory_domains": [ 00:11:30.395 { 00:11:30.395 "dma_device_id": "system", 00:11:30.395 "dma_device_type": 1 00:11:30.395 }, 00:11:30.395 { 00:11:30.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.395 "dma_device_type": 2 00:11:30.395 } 00:11:30.395 ], 00:11:30.395 "driver_specific": { 00:11:30.395 "passthru": { 00:11:30.395 "name": "pt2", 00:11:30.395 "base_bdev_name": "malloc2" 00:11:30.395 } 00:11:30.395 } 00:11:30.395 }' 00:11:30.395 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:30.395 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:30.395 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:30.395 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:30.652 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:30.653 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:30.653 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:30.653 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:30.653 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:30.653 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:30.653 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:30.653 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # 
[[ null == null ]] 00:11:30.653 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:30.653 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:11:30.911 [2024-07-25 10:26:34.543803] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.911 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 7271f51e-eaf6-410b-a548-ac41828db5a8 '!=' 7271f51e-eaf6-410b-a548-ac41828db5a8 ']' 00:11:30.911 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:11:30.911 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:30.911 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:30.911 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:31.169 [2024-07-25 10:26:34.840372] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:31.169 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:31.169 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:31.169 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:31.169 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:31.169 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:31.169 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:31.169 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:11:31.169 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:31.169 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:31.169 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:31.169 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.169 10:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.733 10:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.733 "name": "raid_bdev1", 00:11:31.733 "uuid": "7271f51e-eaf6-410b-a548-ac41828db5a8", 00:11:31.733 "strip_size_kb": 0, 00:11:31.733 "state": "online", 00:11:31.733 "raid_level": "raid1", 00:11:31.733 "superblock": true, 00:11:31.733 "num_base_bdevs": 2, 00:11:31.733 "num_base_bdevs_discovered": 1, 00:11:31.733 "num_base_bdevs_operational": 1, 00:11:31.733 "base_bdevs_list": [ 00:11:31.733 { 00:11:31.733 "name": null, 00:11:31.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.733 "is_configured": false, 00:11:31.733 "data_offset": 2048, 00:11:31.733 "data_size": 63488 00:11:31.733 }, 00:11:31.733 { 00:11:31.733 "name": "pt2", 00:11:31.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:31.733 "is_configured": true, 00:11:31.733 "data_offset": 2048, 00:11:31.733 "data_size": 63488 00:11:31.733 } 00:11:31.733 ] 00:11:31.733 }' 00:11:31.733 10:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:31.733 10:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.297 10:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 
00:11:32.297 [2024-07-25 10:26:35.919169] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.297 [2024-07-25 10:26:35.919198] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.297 [2024-07-25 10:26:35.919285] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.297 [2024-07-25 10:26:35.919348] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.297 [2024-07-25 10:26:35.919363] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11ab3d0 name raid_bdev1, state offline 00:11:32.297 10:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.297 10:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:11:32.553 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:11:32.553 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:11:32.553 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:11:32.553 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:11:32.553 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:32.810 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:11:32.811 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:11:32.811 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:11:32.811 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:11:32.811 10:26:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:11:32.811 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:33.069 [2024-07-25 10:26:36.645047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:33.069 [2024-07-25 10:26:36.645126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.069 [2024-07-25 10:26:36.645150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11ab650 00:11:33.069 [2024-07-25 10:26:36.645166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.069 [2024-07-25 10:26:36.646975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.069 [2024-07-25 10:26:36.647012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:33.069 [2024-07-25 10:26:36.647123] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:33.069 [2024-07-25 10:26:36.647162] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:33.069 [2024-07-25 10:26:36.647279] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1344210 00:11:33.069 [2024-07-25 10:26:36.647295] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:33.069 [2024-07-25 10:26:36.647473] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1346a50 00:11:33.069 [2024-07-25 10:26:36.647626] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1344210 00:11:33.069 [2024-07-25 10:26:36.647643] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1344210 00:11:33.069 [2024-07-25 10:26:36.647754] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.069 pt2 00:11:33.069 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:33.069 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:33.069 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:33.069 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:33.069 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:33.069 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:33.069 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:33.069 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:33.069 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:33.069 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:33.069 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.069 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.328 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:33.328 "name": "raid_bdev1", 00:11:33.328 "uuid": "7271f51e-eaf6-410b-a548-ac41828db5a8", 00:11:33.328 "strip_size_kb": 0, 00:11:33.328 "state": "online", 00:11:33.328 "raid_level": "raid1", 00:11:33.328 "superblock": true, 00:11:33.328 "num_base_bdevs": 2, 00:11:33.328 "num_base_bdevs_discovered": 1, 00:11:33.328 "num_base_bdevs_operational": 1, 00:11:33.328 "base_bdevs_list": [ 
00:11:33.328 { 00:11:33.328 "name": null, 00:11:33.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.328 "is_configured": false, 00:11:33.328 "data_offset": 2048, 00:11:33.328 "data_size": 63488 00:11:33.328 }, 00:11:33.328 { 00:11:33.328 "name": "pt2", 00:11:33.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.328 "is_configured": true, 00:11:33.328 "data_offset": 2048, 00:11:33.328 "data_size": 63488 00:11:33.328 } 00:11:33.328 ] 00:11:33.328 }' 00:11:33.328 10:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:33.328 10:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.893 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:34.151 [2024-07-25 10:26:37.715867] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.151 [2024-07-25 10:26:37.715895] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.151 [2024-07-25 10:26:37.715969] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.151 [2024-07-25 10:26:37.716030] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.151 [2024-07-25 10:26:37.716044] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1344210 name raid_bdev1, state offline 00:11:34.151 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.151 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:11:34.408 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:11:34.408 10:26:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:11:34.408 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:11:34.408 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:34.666 [2024-07-25 10:26:38.213174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:34.666 [2024-07-25 10:26:38.213233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.666 [2024-07-25 10:26:38.213258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x135f410 00:11:34.666 [2024-07-25 10:26:38.213273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.666 [2024-07-25 10:26:38.214999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.666 [2024-07-25 10:26:38.215027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:34.666 [2024-07-25 10:26:38.215117] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:34.666 [2024-07-25 10:26:38.215154] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:34.666 [2024-07-25 10:26:38.215280] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:34.666 [2024-07-25 10:26:38.215299] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.666 [2024-07-25 10:26:38.215315] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1346130 name raid_bdev1, state configuring 00:11:34.666 [2024-07-25 10:26:38.215343] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:34.666 [2024-07-25 10:26:38.215422] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x11a3d90 00:11:34.666 [2024-07-25 10:26:38.215438] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:34.666 [2024-07-25 10:26:38.215604] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13456a0 00:11:34.666 [2024-07-25 10:26:38.215758] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11a3d90 00:11:34.666 [2024-07-25 10:26:38.215774] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11a3d90 00:11:34.666 [2024-07-25 10:26:38.215883] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.666 pt1 00:11:34.666 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:11:34.666 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:34.666 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:34.666 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:34.666 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:34.666 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:34.666 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:34.666 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:34.666 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:34.666 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:34.666 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:34.666 10:26:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.666 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.924 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:34.924 "name": "raid_bdev1", 00:11:34.924 "uuid": "7271f51e-eaf6-410b-a548-ac41828db5a8", 00:11:34.924 "strip_size_kb": 0, 00:11:34.924 "state": "online", 00:11:34.924 "raid_level": "raid1", 00:11:34.924 "superblock": true, 00:11:34.924 "num_base_bdevs": 2, 00:11:34.924 "num_base_bdevs_discovered": 1, 00:11:34.924 "num_base_bdevs_operational": 1, 00:11:34.924 "base_bdevs_list": [ 00:11:34.924 { 00:11:34.924 "name": null, 00:11:34.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.924 "is_configured": false, 00:11:34.924 "data_offset": 2048, 00:11:34.924 "data_size": 63488 00:11:34.924 }, 00:11:34.924 { 00:11:34.924 "name": "pt2", 00:11:34.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.924 "is_configured": true, 00:11:34.924 "data_offset": 2048, 00:11:34.924 "data_size": 63488 00:11:34.924 } 00:11:34.924 ] 00:11:34.924 }' 00:11:34.924 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:34.924 10:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:11:35.489 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:35.749 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:11:35.749 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:35.749 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:11:36.008 [2024-07-25 10:26:39.541074] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.008 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 7271f51e-eaf6-410b-a548-ac41828db5a8 '!=' 7271f51e-eaf6-410b-a548-ac41828db5a8 ']' 00:11:36.008 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2347114 00:11:36.008 10:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2347114 ']' 00:11:36.008 10:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2347114 00:11:36.008 10:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:36.008 10:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:36.008 10:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2347114 00:11:36.008 10:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:36.008 10:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:36.008 10:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2347114' 00:11:36.008 killing process with pid 2347114 00:11:36.008 10:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2347114 00:11:36.008 [2024-07-25 10:26:39.585932] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.008 10:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2347114 00:11:36.008 [2024-07-25 10:26:39.586011] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:11:36.008 [2024-07-25 10:26:39.586074] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.008 [2024-07-25 10:26:39.586089] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11a3d90 name raid_bdev1, state offline 00:11:36.008 [2024-07-25 10:26:39.609082] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:36.266 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:36.266 00:11:36.266 real 0m15.766s 00:11:36.266 user 0m29.046s 00:11:36.266 sys 0m2.184s 00:11:36.266 10:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:36.266 10:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.266 ************************************ 00:11:36.266 END TEST raid_superblock_test 00:11:36.266 ************************************ 00:11:36.266 10:26:39 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:36.266 10:26:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:36.266 10:26:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.266 10:26:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:36.266 ************************************ 00:11:36.266 START TEST raid_read_error_test 00:11:36.266 ************************************ 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 
-- # (( i = 1 )) 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.QCzgIwFfLD 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2349304 
00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2349304 /var/tmp/spdk-raid.sock 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2349304 ']' 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:36.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.266 10:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.524 [2024-07-25 10:26:40.012974] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:11:36.524 [2024-07-25 10:26:40.013062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2349304 ] 00:11:36.524 [2024-07-25 10:26:40.097568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.524 [2024-07-25 10:26:40.207752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.782 [2024-07-25 10:26:40.279375] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.782 [2024-07-25 10:26:40.279421] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.347 10:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:37.347 10:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:37.347 10:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:37.347 10:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:37.605 BaseBdev1_malloc 00:11:37.605 10:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:37.862 true 00:11:37.862 10:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:38.120 [2024-07-25 10:26:41.739598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:38.120 [2024-07-25 10:26:41.739661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:38.120 [2024-07-25 10:26:41.739690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1957250 00:11:38.120 [2024-07-25 10:26:41.739706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.120 [2024-07-25 10:26:41.741640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.120 [2024-07-25 10:26:41.741669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:38.120 BaseBdev1 00:11:38.121 10:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:38.121 10:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:38.379 BaseBdev2_malloc 00:11:38.379 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:38.637 true 00:11:38.637 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:38.895 [2024-07-25 10:26:42.493449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:38.895 [2024-07-25 10:26:42.493509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.895 [2024-07-25 10:26:42.493536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1946650 00:11:38.895 [2024-07-25 10:26:42.493552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.895 [2024-07-25 10:26:42.495310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.895 [2024-07-25 10:26:42.495338] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:38.895 BaseBdev2 00:11:38.895 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:39.153 [2024-07-25 10:26:42.782260] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.153 [2024-07-25 10:26:42.783799] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.153 [2024-07-25 10:26:42.784047] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x193d7d0 00:11:39.153 [2024-07-25 10:26:42.784066] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:39.153 [2024-07-25 10:26:42.784309] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17ba3f0 00:11:39.153 [2024-07-25 10:26:42.784519] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x193d7d0 00:11:39.153 [2024-07-25 10:26:42.784535] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x193d7d0 00:11:39.153 [2024-07-25 10:26:42.784691] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.153 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:39.153 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:39.153 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:39.153 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:39.153 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:39.153 10:26:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:39.153 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.153 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.153 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.153 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.153 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.153 10:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.411 10:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.411 "name": "raid_bdev1", 00:11:39.411 "uuid": "2cb6af36-1bd8-4a73-8310-0248a51ec5e3", 00:11:39.411 "strip_size_kb": 0, 00:11:39.411 "state": "online", 00:11:39.411 "raid_level": "raid1", 00:11:39.411 "superblock": true, 00:11:39.411 "num_base_bdevs": 2, 00:11:39.411 "num_base_bdevs_discovered": 2, 00:11:39.411 "num_base_bdevs_operational": 2, 00:11:39.411 "base_bdevs_list": [ 00:11:39.411 { 00:11:39.411 "name": "BaseBdev1", 00:11:39.411 "uuid": "894bdcd6-a8f3-5831-bc41-041ed53734da", 00:11:39.411 "is_configured": true, 00:11:39.411 "data_offset": 2048, 00:11:39.411 "data_size": 63488 00:11:39.411 }, 00:11:39.411 { 00:11:39.411 "name": "BaseBdev2", 00:11:39.411 "uuid": "f7aef8ac-72bb-5892-a06a-117e59ca7beb", 00:11:39.411 "is_configured": true, 00:11:39.411 "data_offset": 2048, 00:11:39.411 "data_size": 63488 00:11:39.411 } 00:11:39.411 ] 00:11:39.411 }' 00:11:39.411 10:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.411 10:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.976 10:26:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:39.976 10:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:40.234 [2024-07-25 10:26:43.717190] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x179a430 00:11:41.168 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:41.168 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:41.168 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:41.426 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:11:41.426 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:41.426 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:41.426 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:41.426 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:41.426 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:41.426 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:41.426 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:41.426 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:41.426 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:41.426 10:26:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:41.426 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:41.426 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.426 10:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.426 10:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:41.426 "name": "raid_bdev1", 00:11:41.426 "uuid": "2cb6af36-1bd8-4a73-8310-0248a51ec5e3", 00:11:41.426 "strip_size_kb": 0, 00:11:41.426 "state": "online", 00:11:41.426 "raid_level": "raid1", 00:11:41.426 "superblock": true, 00:11:41.426 "num_base_bdevs": 2, 00:11:41.426 "num_base_bdevs_discovered": 2, 00:11:41.426 "num_base_bdevs_operational": 2, 00:11:41.426 "base_bdevs_list": [ 00:11:41.426 { 00:11:41.426 "name": "BaseBdev1", 00:11:41.426 "uuid": "894bdcd6-a8f3-5831-bc41-041ed53734da", 00:11:41.426 "is_configured": true, 00:11:41.426 "data_offset": 2048, 00:11:41.426 "data_size": 63488 00:11:41.426 }, 00:11:41.426 { 00:11:41.426 "name": "BaseBdev2", 00:11:41.426 "uuid": "f7aef8ac-72bb-5892-a06a-117e59ca7beb", 00:11:41.426 "is_configured": true, 00:11:41.426 "data_offset": 2048, 00:11:41.426 "data_size": 63488 00:11:41.426 } 00:11:41.426 ] 00:11:41.426 }' 00:11:41.426 10:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:41.426 10:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.361 10:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:42.361 [2024-07-25 10:26:45.955069] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:11:42.361 [2024-07-25 10:26:45.955145] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.361 [2024-07-25 10:26:45.958120] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.361 [2024-07-25 10:26:45.958158] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.361 [2024-07-25 10:26:45.958235] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.361 [2024-07-25 10:26:45.958251] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x193d7d0 name raid_bdev1, state offline 00:11:42.361 0 00:11:42.361 10:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2349304 00:11:42.361 10:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2349304 ']' 00:11:42.361 10:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2349304 00:11:42.361 10:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:42.361 10:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.361 10:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2349304 00:11:42.361 10:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:42.361 10:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:42.361 10:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2349304' 00:11:42.361 killing process with pid 2349304 00:11:42.361 10:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2349304 00:11:42.361 [2024-07-25 10:26:46.003722] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.361 10:26:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2349304 00:11:42.361 [2024-07-25 10:26:46.019374] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:42.619 10:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.QCzgIwFfLD 00:11:42.619 10:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:42.620 10:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:42.620 10:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:42.620 10:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:42.620 10:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:42.620 10:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:42.620 10:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:42.620 00:11:42.620 real 0m6.358s 00:11:42.620 user 0m10.040s 00:11:42.620 sys 0m0.944s 00:11:42.620 10:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.620 10:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.620 ************************************ 00:11:42.620 END TEST raid_read_error_test 00:11:42.620 ************************************ 00:11:42.620 10:26:46 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:42.620 10:26:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:42.620 10:26:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.620 10:26:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:42.878 ************************************ 00:11:42.878 START TEST raid_write_error_test 00:11:42.878 ************************************ 00:11:42.878 10:26:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:42.878 
10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.VenE2p1UZp 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2350195 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2350195 /var/tmp/spdk-raid.sock 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2350195 ']' 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:42.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:42.878 10:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.878 [2024-07-25 10:26:46.409413] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:11:42.878 [2024-07-25 10:26:46.409497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2350195 ] 00:11:42.878 [2024-07-25 10:26:46.489387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.136 [2024-07-25 10:26:46.601355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.136 [2024-07-25 10:26:46.673514] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.136 [2024-07-25 10:26:46.673567] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.701 10:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:43.701 10:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:43.701 10:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:43.701 10:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:43.959 BaseBdev1_malloc 00:11:43.959 10:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:44.217 true 00:11:44.217 10:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:44.474 [2024-07-25 10:26:48.124264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:44.474 [2024-07-25 10:26:48.124324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:44.474 [2024-07-25 10:26:48.124353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbeb250 00:11:44.474 [2024-07-25 10:26:48.124369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.474 [2024-07-25 10:26:48.126261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.474 [2024-07-25 10:26:48.126290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:44.474 BaseBdev1 00:11:44.475 10:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:44.475 10:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:44.732 BaseBdev2_malloc 00:11:44.732 10:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:44.991 true 00:11:44.991 10:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:45.251 [2024-07-25 10:26:48.914332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:45.251 [2024-07-25 10:26:48.914390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.251 [2024-07-25 10:26:48.914417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbda650 00:11:45.251 [2024-07-25 10:26:48.914433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.251 [2024-07-25 10:26:48.916189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.251 [2024-07-25 10:26:48.916217] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:45.251 BaseBdev2 00:11:45.251 10:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:45.512 [2024-07-25 10:26:49.199131] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.512 [2024-07-25 10:26:49.200563] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.512 [2024-07-25 10:26:49.200810] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xbd17d0 00:11:45.512 [2024-07-25 10:26:49.200829] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.512 [2024-07-25 10:26:49.201062] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa4e3f0 00:11:45.512 [2024-07-25 10:26:49.201283] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbd17d0 00:11:45.512 [2024-07-25 10:26:49.201300] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbd17d0 00:11:45.512 [2024-07-25 10:26:49.201462] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.512 10:26:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:45.512 10:26:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:45.512 10:26:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:45.512 10:26:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:45.512 10:26:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:45.512 10:26:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:45.512 10:26:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:45.512 10:26:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:45.512 10:26:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:45.512 10:26:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:45.512 10:26:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.512 10:26:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.769 10:26:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:45.769 "name": "raid_bdev1", 00:11:45.769 "uuid": "4c3bd82e-1890-478c-b96f-298472f9f441", 00:11:45.769 "strip_size_kb": 0, 00:11:45.769 "state": "online", 00:11:45.769 "raid_level": "raid1", 00:11:45.769 "superblock": true, 00:11:45.769 "num_base_bdevs": 2, 00:11:45.769 "num_base_bdevs_discovered": 2, 00:11:45.769 "num_base_bdevs_operational": 2, 00:11:45.769 "base_bdevs_list": [ 00:11:45.769 { 00:11:45.769 "name": "BaseBdev1", 00:11:45.769 "uuid": "6064e7dc-be73-5602-b0f4-dddf3f3d1a4c", 00:11:45.769 "is_configured": true, 00:11:45.769 "data_offset": 2048, 00:11:45.769 "data_size": 63488 00:11:45.769 }, 00:11:45.769 { 00:11:45.769 "name": "BaseBdev2", 00:11:45.769 "uuid": "dd9c1faa-b5d9-5d70-a341-1eb39e0c3751", 00:11:45.769 "is_configured": true, 00:11:45.769 "data_offset": 2048, 00:11:45.769 "data_size": 63488 00:11:45.769 } 00:11:45.769 ] 00:11:45.769 }' 00:11:45.769 10:26:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:45.769 10:26:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.704 
10:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:46.704 10:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:46.704 [2024-07-25 10:26:50.158092] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa2e430 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:47.671 [2024-07-25 10:26:51.332036] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:47.671 [2024-07-25 10:26:51.332100] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.671 [2024-07-25 10:26:51.332326] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xa2e430 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:47.671 10:26:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.671 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.929 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:47.929 "name": "raid_bdev1", 00:11:47.929 "uuid": "4c3bd82e-1890-478c-b96f-298472f9f441", 00:11:47.929 "strip_size_kb": 0, 00:11:47.929 "state": "online", 00:11:47.929 "raid_level": "raid1", 00:11:47.929 "superblock": true, 00:11:47.929 "num_base_bdevs": 2, 00:11:47.929 "num_base_bdevs_discovered": 1, 00:11:47.929 "num_base_bdevs_operational": 1, 00:11:47.929 "base_bdevs_list": [ 00:11:47.929 { 00:11:47.929 "name": null, 00:11:47.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.929 "is_configured": false, 00:11:47.930 "data_offset": 2048, 00:11:47.930 "data_size": 63488 00:11:47.930 }, 00:11:47.930 { 00:11:47.930 "name": "BaseBdev2", 00:11:47.930 "uuid": "dd9c1faa-b5d9-5d70-a341-1eb39e0c3751", 00:11:47.930 "is_configured": true, 00:11:47.930 "data_offset": 2048, 00:11:47.930 "data_size": 63488 00:11:47.930 } 00:11:47.930 ] 00:11:47.930 }' 00:11:47.930 10:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- 
# xtrace_disable 00:11:47.930 10:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.496 10:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:48.755 [2024-07-25 10:26:52.392396] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.755 [2024-07-25 10:26:52.392449] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.755 [2024-07-25 10:26:52.395381] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.755 [2024-07-25 10:26:52.395417] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.755 [2024-07-25 10:26:52.395470] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.755 [2024-07-25 10:26:52.395485] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbd17d0 name raid_bdev1, state offline 00:11:48.755 0 00:11:48.755 10:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2350195 00:11:48.755 10:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2350195 ']' 00:11:48.755 10:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2350195 00:11:48.755 10:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:48.755 10:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:48.755 10:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2350195 00:11:48.755 10:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:48.755 10:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:11:48.755 10:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2350195' 00:11:48.755 killing process with pid 2350195 00:11:48.755 10:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2350195 00:11:48.755 [2024-07-25 10:26:52.444988] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.755 10:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2350195 00:11:48.755 [2024-07-25 10:26:52.459171] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.322 10:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.VenE2p1UZp 00:11:49.322 10:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:49.322 10:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:49.322 10:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:49.322 10:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:49.322 10:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:49.322 10:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:49.322 10:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:49.322 00:11:49.322 real 0m6.396s 00:11:49.322 user 0m10.217s 00:11:49.322 sys 0m0.869s 00:11:49.322 10:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.322 10:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.322 ************************************ 00:11:49.322 END TEST raid_write_error_test 00:11:49.322 ************************************ 00:11:49.322 10:26:52 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:11:49.322 10:26:52 bdev_raid -- 
bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:49.322 10:26:52 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:49.322 10:26:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:49.322 10:26:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.322 10:26:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.322 ************************************ 00:11:49.322 START TEST raid_state_function_test 00:11:49.322 ************************************ 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:49.322 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2351087 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2351087' 00:11:49.323 
Process raid pid: 2351087 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2351087 /var/tmp/spdk-raid.sock 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2351087 ']' 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:49.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:49.323 10:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.323 [2024-07-25 10:26:52.848127] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:11:49.323 [2024-07-25 10:26:52.848216] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.323 [2024-07-25 10:26:52.931306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.581 [2024-07-25 10:26:53.053867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.581 [2024-07-25 10:26:53.121960] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.581 [2024-07-25 10:26:53.122001] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.581 10:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:49.581 10:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:49.581 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:49.842 [2024-07-25 10:26:53.450011] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.842 [2024-07-25 10:26:53.450069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.842 [2024-07-25 10:26:53.450081] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.842 [2024-07-25 10:26:53.450095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.842 [2024-07-25 10:26:53.450115] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:49.842 [2024-07-25 10:26:53.450130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:49.842 10:26:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:49.842 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:49.842 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:49.842 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:49.842 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:49.842 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:49.842 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:49.842 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:49.842 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:49.842 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:49.842 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.842 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.101 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.101 "name": "Existed_Raid", 00:11:50.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.101 "strip_size_kb": 64, 00:11:50.101 "state": "configuring", 00:11:50.101 "raid_level": "raid0", 00:11:50.101 "superblock": false, 00:11:50.101 "num_base_bdevs": 3, 00:11:50.101 "num_base_bdevs_discovered": 0, 00:11:50.101 "num_base_bdevs_operational": 3, 00:11:50.101 "base_bdevs_list": [ 00:11:50.101 { 
00:11:50.101 "name": "BaseBdev1", 00:11:50.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.101 "is_configured": false, 00:11:50.101 "data_offset": 0, 00:11:50.101 "data_size": 0 00:11:50.101 }, 00:11:50.101 { 00:11:50.101 "name": "BaseBdev2", 00:11:50.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.101 "is_configured": false, 00:11:50.101 "data_offset": 0, 00:11:50.101 "data_size": 0 00:11:50.101 }, 00:11:50.101 { 00:11:50.101 "name": "BaseBdev3", 00:11:50.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.101 "is_configured": false, 00:11:50.101 "data_offset": 0, 00:11:50.101 "data_size": 0 00:11:50.101 } 00:11:50.101 ] 00:11:50.101 }' 00:11:50.101 10:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.101 10:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.665 10:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:50.923 [2024-07-25 10:26:54.528736] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:50.923 [2024-07-25 10:26:54.528774] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a7620 name Existed_Raid, state configuring 00:11:50.923 10:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:51.181 [2024-07-25 10:26:54.777394] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.181 [2024-07-25 10:26:54.777442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.181 [2024-07-25 10:26:54.777453] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev2 00:11:51.181 [2024-07-25 10:26:54.777466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.181 [2024-07-25 10:26:54.777475] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:51.181 [2024-07-25 10:26:54.777488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.181 10:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:51.439 [2024-07-25 10:26:55.034093] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.439 BaseBdev1 00:11:51.439 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:51.439 10:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:51.439 10:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:51.439 10:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:51.439 10:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:51.439 10:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:51.439 10:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:51.696 10:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:51.954 [ 00:11:51.954 { 00:11:51.954 "name": "BaseBdev1", 00:11:51.954 "aliases": [ 00:11:51.954 
"dcca70bf-dc54-494e-afac-97fcd67b93ce" 00:11:51.954 ], 00:11:51.954 "product_name": "Malloc disk", 00:11:51.954 "block_size": 512, 00:11:51.954 "num_blocks": 65536, 00:11:51.954 "uuid": "dcca70bf-dc54-494e-afac-97fcd67b93ce", 00:11:51.954 "assigned_rate_limits": { 00:11:51.954 "rw_ios_per_sec": 0, 00:11:51.954 "rw_mbytes_per_sec": 0, 00:11:51.954 "r_mbytes_per_sec": 0, 00:11:51.954 "w_mbytes_per_sec": 0 00:11:51.954 }, 00:11:51.954 "claimed": true, 00:11:51.954 "claim_type": "exclusive_write", 00:11:51.954 "zoned": false, 00:11:51.954 "supported_io_types": { 00:11:51.954 "read": true, 00:11:51.954 "write": true, 00:11:51.954 "unmap": true, 00:11:51.954 "flush": true, 00:11:51.954 "reset": true, 00:11:51.954 "nvme_admin": false, 00:11:51.954 "nvme_io": false, 00:11:51.954 "nvme_io_md": false, 00:11:51.954 "write_zeroes": true, 00:11:51.954 "zcopy": true, 00:11:51.954 "get_zone_info": false, 00:11:51.954 "zone_management": false, 00:11:51.954 "zone_append": false, 00:11:51.954 "compare": false, 00:11:51.954 "compare_and_write": false, 00:11:51.954 "abort": true, 00:11:51.954 "seek_hole": false, 00:11:51.954 "seek_data": false, 00:11:51.954 "copy": true, 00:11:51.954 "nvme_iov_md": false 00:11:51.954 }, 00:11:51.954 "memory_domains": [ 00:11:51.954 { 00:11:51.954 "dma_device_id": "system", 00:11:51.954 "dma_device_type": 1 00:11:51.954 }, 00:11:51.954 { 00:11:51.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.954 "dma_device_type": 2 00:11:51.954 } 00:11:51.954 ], 00:11:51.954 "driver_specific": {} 00:11:51.954 } 00:11:51.954 ] 00:11:51.954 10:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:51.954 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:51.954 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:51.954 10:26:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:51.954 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:51.954 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:51.954 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:51.954 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:51.954 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:51.954 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:51.954 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:51.954 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.954 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.212 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:52.212 "name": "Existed_Raid", 00:11:52.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.212 "strip_size_kb": 64, 00:11:52.212 "state": "configuring", 00:11:52.212 "raid_level": "raid0", 00:11:52.212 "superblock": false, 00:11:52.212 "num_base_bdevs": 3, 00:11:52.212 "num_base_bdevs_discovered": 1, 00:11:52.212 "num_base_bdevs_operational": 3, 00:11:52.212 "base_bdevs_list": [ 00:11:52.212 { 00:11:52.212 "name": "BaseBdev1", 00:11:52.212 "uuid": "dcca70bf-dc54-494e-afac-97fcd67b93ce", 00:11:52.212 "is_configured": true, 00:11:52.212 "data_offset": 0, 00:11:52.212 "data_size": 65536 00:11:52.212 }, 00:11:52.212 { 00:11:52.212 "name": "BaseBdev2", 00:11:52.212 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:52.212 "is_configured": false, 00:11:52.212 "data_offset": 0, 00:11:52.212 "data_size": 0 00:11:52.212 }, 00:11:52.212 { 00:11:52.212 "name": "BaseBdev3", 00:11:52.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.212 "is_configured": false, 00:11:52.212 "data_offset": 0, 00:11:52.212 "data_size": 0 00:11:52.212 } 00:11:52.212 ] 00:11:52.212 }' 00:11:52.212 10:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:52.212 10:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.778 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:53.036 [2024-07-25 10:26:56.542063] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:53.036 [2024-07-25 10:26:56.542123] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a6e50 name Existed_Raid, state configuring 00:11:53.036 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:53.294 [2024-07-25 10:26:56.782728] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.294 [2024-07-25 10:26:56.784248] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:53.294 [2024-07-25 10:26:56.784282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:53.294 [2024-07-25 10:26:56.784295] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:53.294 [2024-07-25 10:26:56.784308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.294 10:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.553 10:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:53.553 "name": "Existed_Raid", 00:11:53.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.553 "strip_size_kb": 64, 00:11:53.553 "state": "configuring", 00:11:53.553 
"raid_level": "raid0", 00:11:53.553 "superblock": false, 00:11:53.553 "num_base_bdevs": 3, 00:11:53.553 "num_base_bdevs_discovered": 1, 00:11:53.553 "num_base_bdevs_operational": 3, 00:11:53.553 "base_bdevs_list": [ 00:11:53.553 { 00:11:53.553 "name": "BaseBdev1", 00:11:53.553 "uuid": "dcca70bf-dc54-494e-afac-97fcd67b93ce", 00:11:53.553 "is_configured": true, 00:11:53.553 "data_offset": 0, 00:11:53.553 "data_size": 65536 00:11:53.553 }, 00:11:53.553 { 00:11:53.553 "name": "BaseBdev2", 00:11:53.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.553 "is_configured": false, 00:11:53.553 "data_offset": 0, 00:11:53.553 "data_size": 0 00:11:53.553 }, 00:11:53.553 { 00:11:53.553 "name": "BaseBdev3", 00:11:53.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.553 "is_configured": false, 00:11:53.553 "data_offset": 0, 00:11:53.553 "data_size": 0 00:11:53.553 } 00:11:53.553 ] 00:11:53.553 }' 00:11:53.553 10:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:53.553 10:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.118 10:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:54.376 [2024-07-25 10:26:57.839838] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.376 BaseBdev2 00:11:54.376 10:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:54.376 10:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:54.376 10:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:54.376 10:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:54.376 10:26:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:54.376 10:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:54.376 10:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:54.633 10:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:54.633 [ 00:11:54.633 { 00:11:54.633 "name": "BaseBdev2", 00:11:54.634 "aliases": [ 00:11:54.634 "be73a614-bd51-41b7-b0fb-e27776fa3753" 00:11:54.634 ], 00:11:54.634 "product_name": "Malloc disk", 00:11:54.634 "block_size": 512, 00:11:54.634 "num_blocks": 65536, 00:11:54.634 "uuid": "be73a614-bd51-41b7-b0fb-e27776fa3753", 00:11:54.634 "assigned_rate_limits": { 00:11:54.634 "rw_ios_per_sec": 0, 00:11:54.634 "rw_mbytes_per_sec": 0, 00:11:54.634 "r_mbytes_per_sec": 0, 00:11:54.634 "w_mbytes_per_sec": 0 00:11:54.634 }, 00:11:54.634 "claimed": true, 00:11:54.634 "claim_type": "exclusive_write", 00:11:54.634 "zoned": false, 00:11:54.634 "supported_io_types": { 00:11:54.634 "read": true, 00:11:54.634 "write": true, 00:11:54.634 "unmap": true, 00:11:54.634 "flush": true, 00:11:54.634 "reset": true, 00:11:54.634 "nvme_admin": false, 00:11:54.634 "nvme_io": false, 00:11:54.634 "nvme_io_md": false, 00:11:54.634 "write_zeroes": true, 00:11:54.634 "zcopy": true, 00:11:54.634 "get_zone_info": false, 00:11:54.634 "zone_management": false, 00:11:54.634 "zone_append": false, 00:11:54.634 "compare": false, 00:11:54.634 "compare_and_write": false, 00:11:54.634 "abort": true, 00:11:54.634 "seek_hole": false, 00:11:54.634 "seek_data": false, 00:11:54.634 "copy": true, 00:11:54.634 "nvme_iov_md": false 00:11:54.634 }, 00:11:54.634 "memory_domains": [ 00:11:54.634 { 00:11:54.634 "dma_device_id": "system", 
00:11:54.634 "dma_device_type": 1 00:11:54.634 }, 00:11:54.634 { 00:11:54.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.634 "dma_device_type": 2 00:11:54.634 } 00:11:54.634 ], 00:11:54.634 "driver_specific": {} 00:11:54.634 } 00:11:54.634 ] 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:54.892 "name": "Existed_Raid", 00:11:54.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.892 "strip_size_kb": 64, 00:11:54.892 "state": "configuring", 00:11:54.892 "raid_level": "raid0", 00:11:54.892 "superblock": false, 00:11:54.892 "num_base_bdevs": 3, 00:11:54.892 "num_base_bdevs_discovered": 2, 00:11:54.892 "num_base_bdevs_operational": 3, 00:11:54.892 "base_bdevs_list": [ 00:11:54.892 { 00:11:54.892 "name": "BaseBdev1", 00:11:54.892 "uuid": "dcca70bf-dc54-494e-afac-97fcd67b93ce", 00:11:54.892 "is_configured": true, 00:11:54.892 "data_offset": 0, 00:11:54.892 "data_size": 65536 00:11:54.892 }, 00:11:54.892 { 00:11:54.892 "name": "BaseBdev2", 00:11:54.892 "uuid": "be73a614-bd51-41b7-b0fb-e27776fa3753", 00:11:54.892 "is_configured": true, 00:11:54.892 "data_offset": 0, 00:11:54.892 "data_size": 65536 00:11:54.892 }, 00:11:54.892 { 00:11:54.892 "name": "BaseBdev3", 00:11:54.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.892 "is_configured": false, 00:11:54.892 "data_offset": 0, 00:11:54.892 "data_size": 0 00:11:54.892 } 00:11:54.892 ] 00:11:54.892 }' 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:54.892 10:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.457 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:55.715 [2024-07-25 10:26:59.377995] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.715 [2024-07-25 10:26:59.378048] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x9a7d90 00:11:55.715 [2024-07-25 10:26:59.378058] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:55.715 [2024-07-25 10:26:59.378316] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9aba90 00:11:55.715 [2024-07-25 10:26:59.378468] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9a7d90 00:11:55.715 [2024-07-25 10:26:59.378484] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x9a7d90 00:11:55.715 [2024-07-25 10:26:59.378701] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.715 BaseBdev3 00:11:55.715 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:55.715 10:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:55.715 10:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:55.715 10:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:55.715 10:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:55.715 10:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:55.715 10:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:55.973 10:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:56.231 [ 00:11:56.231 { 00:11:56.231 "name": "BaseBdev3", 00:11:56.231 "aliases": [ 00:11:56.231 "a53ecfb3-b16d-4e36-be99-94a57e3ee6de" 00:11:56.231 ], 00:11:56.231 "product_name": "Malloc disk", 00:11:56.231 "block_size": 512, 00:11:56.231 "num_blocks": 65536, 00:11:56.231 "uuid": 
"a53ecfb3-b16d-4e36-be99-94a57e3ee6de", 00:11:56.231 "assigned_rate_limits": { 00:11:56.231 "rw_ios_per_sec": 0, 00:11:56.231 "rw_mbytes_per_sec": 0, 00:11:56.231 "r_mbytes_per_sec": 0, 00:11:56.231 "w_mbytes_per_sec": 0 00:11:56.231 }, 00:11:56.231 "claimed": true, 00:11:56.231 "claim_type": "exclusive_write", 00:11:56.231 "zoned": false, 00:11:56.231 "supported_io_types": { 00:11:56.231 "read": true, 00:11:56.231 "write": true, 00:11:56.231 "unmap": true, 00:11:56.231 "flush": true, 00:11:56.231 "reset": true, 00:11:56.231 "nvme_admin": false, 00:11:56.231 "nvme_io": false, 00:11:56.231 "nvme_io_md": false, 00:11:56.231 "write_zeroes": true, 00:11:56.231 "zcopy": true, 00:11:56.231 "get_zone_info": false, 00:11:56.231 "zone_management": false, 00:11:56.231 "zone_append": false, 00:11:56.231 "compare": false, 00:11:56.231 "compare_and_write": false, 00:11:56.231 "abort": true, 00:11:56.231 "seek_hole": false, 00:11:56.231 "seek_data": false, 00:11:56.231 "copy": true, 00:11:56.231 "nvme_iov_md": false 00:11:56.231 }, 00:11:56.231 "memory_domains": [ 00:11:56.231 { 00:11:56.231 "dma_device_id": "system", 00:11:56.231 "dma_device_type": 1 00:11:56.231 }, 00:11:56.231 { 00:11:56.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.231 "dma_device_type": 2 00:11:56.231 } 00:11:56.231 ], 00:11:56.231 "driver_specific": {} 00:11:56.231 } 00:11:56.231 ] 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:56.231 10:26:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.231 10:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.489 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:56.489 "name": "Existed_Raid", 00:11:56.489 "uuid": "f6272142-0eb6-40a4-accb-a751510bb855", 00:11:56.489 "strip_size_kb": 64, 00:11:56.489 "state": "online", 00:11:56.489 "raid_level": "raid0", 00:11:56.489 "superblock": false, 00:11:56.489 "num_base_bdevs": 3, 00:11:56.489 "num_base_bdevs_discovered": 3, 00:11:56.489 "num_base_bdevs_operational": 3, 00:11:56.489 "base_bdevs_list": [ 00:11:56.489 { 00:11:56.489 "name": "BaseBdev1", 00:11:56.489 "uuid": "dcca70bf-dc54-494e-afac-97fcd67b93ce", 00:11:56.489 "is_configured": true, 00:11:56.489 "data_offset": 0, 00:11:56.489 "data_size": 65536 00:11:56.489 }, 00:11:56.489 { 00:11:56.489 "name": "BaseBdev2", 00:11:56.489 "uuid": 
"be73a614-bd51-41b7-b0fb-e27776fa3753", 00:11:56.489 "is_configured": true, 00:11:56.489 "data_offset": 0, 00:11:56.489 "data_size": 65536 00:11:56.489 }, 00:11:56.489 { 00:11:56.489 "name": "BaseBdev3", 00:11:56.490 "uuid": "a53ecfb3-b16d-4e36-be99-94a57e3ee6de", 00:11:56.490 "is_configured": true, 00:11:56.490 "data_offset": 0, 00:11:56.490 "data_size": 65536 00:11:56.490 } 00:11:56.490 ] 00:11:56.490 }' 00:11:56.490 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:56.490 10:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.054 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:57.054 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:57.054 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:57.054 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:57.054 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:57.054 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:57.054 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:57.054 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:57.312 [2024-07-25 10:27:00.886331] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.312 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:57.312 "name": "Existed_Raid", 00:11:57.312 "aliases": [ 00:11:57.312 "f6272142-0eb6-40a4-accb-a751510bb855" 00:11:57.312 ], 00:11:57.312 "product_name": "Raid Volume", 
00:11:57.312 "block_size": 512, 00:11:57.312 "num_blocks": 196608, 00:11:57.312 "uuid": "f6272142-0eb6-40a4-accb-a751510bb855", 00:11:57.312 "assigned_rate_limits": { 00:11:57.312 "rw_ios_per_sec": 0, 00:11:57.312 "rw_mbytes_per_sec": 0, 00:11:57.312 "r_mbytes_per_sec": 0, 00:11:57.312 "w_mbytes_per_sec": 0 00:11:57.312 }, 00:11:57.312 "claimed": false, 00:11:57.312 "zoned": false, 00:11:57.312 "supported_io_types": { 00:11:57.312 "read": true, 00:11:57.312 "write": true, 00:11:57.312 "unmap": true, 00:11:57.312 "flush": true, 00:11:57.312 "reset": true, 00:11:57.312 "nvme_admin": false, 00:11:57.312 "nvme_io": false, 00:11:57.312 "nvme_io_md": false, 00:11:57.312 "write_zeroes": true, 00:11:57.312 "zcopy": false, 00:11:57.312 "get_zone_info": false, 00:11:57.312 "zone_management": false, 00:11:57.312 "zone_append": false, 00:11:57.312 "compare": false, 00:11:57.312 "compare_and_write": false, 00:11:57.312 "abort": false, 00:11:57.312 "seek_hole": false, 00:11:57.312 "seek_data": false, 00:11:57.312 "copy": false, 00:11:57.312 "nvme_iov_md": false 00:11:57.312 }, 00:11:57.312 "memory_domains": [ 00:11:57.312 { 00:11:57.312 "dma_device_id": "system", 00:11:57.312 "dma_device_type": 1 00:11:57.312 }, 00:11:57.312 { 00:11:57.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.312 "dma_device_type": 2 00:11:57.312 }, 00:11:57.312 { 00:11:57.312 "dma_device_id": "system", 00:11:57.312 "dma_device_type": 1 00:11:57.312 }, 00:11:57.312 { 00:11:57.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.312 "dma_device_type": 2 00:11:57.312 }, 00:11:57.312 { 00:11:57.312 "dma_device_id": "system", 00:11:57.312 "dma_device_type": 1 00:11:57.312 }, 00:11:57.312 { 00:11:57.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.312 "dma_device_type": 2 00:11:57.312 } 00:11:57.312 ], 00:11:57.312 "driver_specific": { 00:11:57.312 "raid": { 00:11:57.312 "uuid": "f6272142-0eb6-40a4-accb-a751510bb855", 00:11:57.312 "strip_size_kb": 64, 00:11:57.312 "state": "online", 00:11:57.312 
"raid_level": "raid0", 00:11:57.312 "superblock": false, 00:11:57.312 "num_base_bdevs": 3, 00:11:57.312 "num_base_bdevs_discovered": 3, 00:11:57.312 "num_base_bdevs_operational": 3, 00:11:57.313 "base_bdevs_list": [ 00:11:57.313 { 00:11:57.313 "name": "BaseBdev1", 00:11:57.313 "uuid": "dcca70bf-dc54-494e-afac-97fcd67b93ce", 00:11:57.313 "is_configured": true, 00:11:57.313 "data_offset": 0, 00:11:57.313 "data_size": 65536 00:11:57.313 }, 00:11:57.313 { 00:11:57.313 "name": "BaseBdev2", 00:11:57.313 "uuid": "be73a614-bd51-41b7-b0fb-e27776fa3753", 00:11:57.313 "is_configured": true, 00:11:57.313 "data_offset": 0, 00:11:57.313 "data_size": 65536 00:11:57.313 }, 00:11:57.313 { 00:11:57.313 "name": "BaseBdev3", 00:11:57.313 "uuid": "a53ecfb3-b16d-4e36-be99-94a57e3ee6de", 00:11:57.313 "is_configured": true, 00:11:57.313 "data_offset": 0, 00:11:57.313 "data_size": 65536 00:11:57.313 } 00:11:57.313 ] 00:11:57.313 } 00:11:57.313 } 00:11:57.313 }' 00:11:57.313 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.313 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:57.313 BaseBdev2 00:11:57.313 BaseBdev3' 00:11:57.313 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:57.313 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:57.313 10:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:57.571 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:57.571 "name": "BaseBdev1", 00:11:57.571 "aliases": [ 00:11:57.571 "dcca70bf-dc54-494e-afac-97fcd67b93ce" 00:11:57.571 ], 00:11:57.571 "product_name": "Malloc disk", 00:11:57.571 
"block_size": 512, 00:11:57.571 "num_blocks": 65536, 00:11:57.571 "uuid": "dcca70bf-dc54-494e-afac-97fcd67b93ce", 00:11:57.571 "assigned_rate_limits": { 00:11:57.571 "rw_ios_per_sec": 0, 00:11:57.571 "rw_mbytes_per_sec": 0, 00:11:57.571 "r_mbytes_per_sec": 0, 00:11:57.571 "w_mbytes_per_sec": 0 00:11:57.571 }, 00:11:57.571 "claimed": true, 00:11:57.571 "claim_type": "exclusive_write", 00:11:57.571 "zoned": false, 00:11:57.571 "supported_io_types": { 00:11:57.571 "read": true, 00:11:57.571 "write": true, 00:11:57.571 "unmap": true, 00:11:57.571 "flush": true, 00:11:57.571 "reset": true, 00:11:57.571 "nvme_admin": false, 00:11:57.571 "nvme_io": false, 00:11:57.571 "nvme_io_md": false, 00:11:57.571 "write_zeroes": true, 00:11:57.571 "zcopy": true, 00:11:57.571 "get_zone_info": false, 00:11:57.571 "zone_management": false, 00:11:57.571 "zone_append": false, 00:11:57.571 "compare": false, 00:11:57.571 "compare_and_write": false, 00:11:57.571 "abort": true, 00:11:57.571 "seek_hole": false, 00:11:57.571 "seek_data": false, 00:11:57.571 "copy": true, 00:11:57.571 "nvme_iov_md": false 00:11:57.571 }, 00:11:57.571 "memory_domains": [ 00:11:57.571 { 00:11:57.571 "dma_device_id": "system", 00:11:57.571 "dma_device_type": 1 00:11:57.571 }, 00:11:57.571 { 00:11:57.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.571 "dma_device_type": 2 00:11:57.571 } 00:11:57.571 ], 00:11:57.571 "driver_specific": {} 00:11:57.571 }' 00:11:57.571 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.571 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.571 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:57.571 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.830 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.830 10:27:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:57.830 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.830 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.830 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:57.830 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.830 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.830 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:57.830 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:57.830 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:57.830 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:58.088 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:58.088 "name": "BaseBdev2", 00:11:58.088 "aliases": [ 00:11:58.088 "be73a614-bd51-41b7-b0fb-e27776fa3753" 00:11:58.088 ], 00:11:58.088 "product_name": "Malloc disk", 00:11:58.088 "block_size": 512, 00:11:58.088 "num_blocks": 65536, 00:11:58.088 "uuid": "be73a614-bd51-41b7-b0fb-e27776fa3753", 00:11:58.088 "assigned_rate_limits": { 00:11:58.088 "rw_ios_per_sec": 0, 00:11:58.088 "rw_mbytes_per_sec": 0, 00:11:58.088 "r_mbytes_per_sec": 0, 00:11:58.088 "w_mbytes_per_sec": 0 00:11:58.088 }, 00:11:58.088 "claimed": true, 00:11:58.088 "claim_type": "exclusive_write", 00:11:58.088 "zoned": false, 00:11:58.088 "supported_io_types": { 00:11:58.088 "read": true, 00:11:58.088 "write": true, 00:11:58.088 "unmap": true, 00:11:58.088 "flush": true, 00:11:58.088 "reset": true, 00:11:58.088 "nvme_admin": 
false, 00:11:58.088 "nvme_io": false, 00:11:58.088 "nvme_io_md": false, 00:11:58.088 "write_zeroes": true, 00:11:58.088 "zcopy": true, 00:11:58.088 "get_zone_info": false, 00:11:58.088 "zone_management": false, 00:11:58.088 "zone_append": false, 00:11:58.088 "compare": false, 00:11:58.088 "compare_and_write": false, 00:11:58.088 "abort": true, 00:11:58.088 "seek_hole": false, 00:11:58.088 "seek_data": false, 00:11:58.088 "copy": true, 00:11:58.088 "nvme_iov_md": false 00:11:58.088 }, 00:11:58.088 "memory_domains": [ 00:11:58.088 { 00:11:58.088 "dma_device_id": "system", 00:11:58.088 "dma_device_type": 1 00:11:58.088 }, 00:11:58.088 { 00:11:58.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.088 "dma_device_type": 2 00:11:58.088 } 00:11:58.088 ], 00:11:58.088 "driver_specific": {} 00:11:58.088 }' 00:11:58.088 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:58.088 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:58.088 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:58.088 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:58.346 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:58.346 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:58.346 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:58.346 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:58.346 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:58.346 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:58.346 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:58.346 10:27:01 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:58.346 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:58.346 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:58.346 10:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:58.603 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:58.603 "name": "BaseBdev3", 00:11:58.603 "aliases": [ 00:11:58.603 "a53ecfb3-b16d-4e36-be99-94a57e3ee6de" 00:11:58.603 ], 00:11:58.603 "product_name": "Malloc disk", 00:11:58.603 "block_size": 512, 00:11:58.603 "num_blocks": 65536, 00:11:58.604 "uuid": "a53ecfb3-b16d-4e36-be99-94a57e3ee6de", 00:11:58.604 "assigned_rate_limits": { 00:11:58.604 "rw_ios_per_sec": 0, 00:11:58.604 "rw_mbytes_per_sec": 0, 00:11:58.604 "r_mbytes_per_sec": 0, 00:11:58.604 "w_mbytes_per_sec": 0 00:11:58.604 }, 00:11:58.604 "claimed": true, 00:11:58.604 "claim_type": "exclusive_write", 00:11:58.604 "zoned": false, 00:11:58.604 "supported_io_types": { 00:11:58.604 "read": true, 00:11:58.604 "write": true, 00:11:58.604 "unmap": true, 00:11:58.604 "flush": true, 00:11:58.604 "reset": true, 00:11:58.604 "nvme_admin": false, 00:11:58.604 "nvme_io": false, 00:11:58.604 "nvme_io_md": false, 00:11:58.604 "write_zeroes": true, 00:11:58.604 "zcopy": true, 00:11:58.604 "get_zone_info": false, 00:11:58.604 "zone_management": false, 00:11:58.604 "zone_append": false, 00:11:58.604 "compare": false, 00:11:58.604 "compare_and_write": false, 00:11:58.604 "abort": true, 00:11:58.604 "seek_hole": false, 00:11:58.604 "seek_data": false, 00:11:58.604 "copy": true, 00:11:58.604 "nvme_iov_md": false 00:11:58.604 }, 00:11:58.604 "memory_domains": [ 00:11:58.604 { 00:11:58.604 "dma_device_id": "system", 00:11:58.604 "dma_device_type": 1 00:11:58.604 
}, 00:11:58.604 { 00:11:58.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.604 "dma_device_type": 2 00:11:58.604 } 00:11:58.604 ], 00:11:58.604 "driver_specific": {} 00:11:58.604 }' 00:11:58.604 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:58.604 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:58.604 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:58.604 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:58.861 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:58.861 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:58.861 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:58.861 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:58.861 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:58.861 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:58.861 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:58.861 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:58.861 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:59.119 [2024-07-25 10:27:02.727068] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.119 [2024-07-25 10:27:02.727096] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.119 [2024-07-25 10:27:02.727154] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.119 
10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.119 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:11:59.377 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:59.377 "name": "Existed_Raid", 00:11:59.377 "uuid": "f6272142-0eb6-40a4-accb-a751510bb855", 00:11:59.377 "strip_size_kb": 64, 00:11:59.377 "state": "offline", 00:11:59.377 "raid_level": "raid0", 00:11:59.377 "superblock": false, 00:11:59.377 "num_base_bdevs": 3, 00:11:59.377 "num_base_bdevs_discovered": 2, 00:11:59.377 "num_base_bdevs_operational": 2, 00:11:59.377 "base_bdevs_list": [ 00:11:59.377 { 00:11:59.377 "name": null, 00:11:59.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.377 "is_configured": false, 00:11:59.377 "data_offset": 0, 00:11:59.377 "data_size": 65536 00:11:59.377 }, 00:11:59.377 { 00:11:59.377 "name": "BaseBdev2", 00:11:59.377 "uuid": "be73a614-bd51-41b7-b0fb-e27776fa3753", 00:11:59.377 "is_configured": true, 00:11:59.377 "data_offset": 0, 00:11:59.377 "data_size": 65536 00:11:59.377 }, 00:11:59.377 { 00:11:59.377 "name": "BaseBdev3", 00:11:59.377 "uuid": "a53ecfb3-b16d-4e36-be99-94a57e3ee6de", 00:11:59.377 "is_configured": true, 00:11:59.377 "data_offset": 0, 00:11:59.377 "data_size": 65536 00:11:59.377 } 00:11:59.377 ] 00:11:59.377 }' 00:11:59.377 10:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:59.377 10:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.942 10:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:59.942 10:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:59.942 10:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.942 10:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:00.200 10:27:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:00.200 10:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.200 10:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:00.457 [2024-07-25 10:27:04.033814] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:00.457 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:00.457 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:00.457 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.457 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:00.716 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:00.716 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.716 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:00.974 [2024-07-25 10:27:04.552734] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.974 [2024-07-25 10:27:04.552794] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a7d90 name Existed_Raid, state offline 00:12:00.974 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:00.974 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:00.974 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.974 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:01.233 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:01.233 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:01.233 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:01.233 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:01.233 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:01.233 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:01.491 BaseBdev2 00:12:01.491 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:01.491 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:01.491 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:01.491 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:01.491 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:01.491 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:01.491 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:01.750 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:02.008 [ 00:12:02.008 { 00:12:02.008 "name": "BaseBdev2", 00:12:02.008 "aliases": [ 00:12:02.008 "7c11f82a-a49c-43b8-a10e-de06c05b2798" 00:12:02.008 ], 00:12:02.008 "product_name": "Malloc disk", 00:12:02.008 "block_size": 512, 00:12:02.008 "num_blocks": 65536, 00:12:02.008 "uuid": "7c11f82a-a49c-43b8-a10e-de06c05b2798", 00:12:02.008 "assigned_rate_limits": { 00:12:02.008 "rw_ios_per_sec": 0, 00:12:02.008 "rw_mbytes_per_sec": 0, 00:12:02.008 "r_mbytes_per_sec": 0, 00:12:02.008 "w_mbytes_per_sec": 0 00:12:02.008 }, 00:12:02.008 "claimed": false, 00:12:02.008 "zoned": false, 00:12:02.008 "supported_io_types": { 00:12:02.008 "read": true, 00:12:02.008 "write": true, 00:12:02.008 "unmap": true, 00:12:02.008 "flush": true, 00:12:02.008 "reset": true, 00:12:02.008 "nvme_admin": false, 00:12:02.008 "nvme_io": false, 00:12:02.008 "nvme_io_md": false, 00:12:02.008 "write_zeroes": true, 00:12:02.008 "zcopy": true, 00:12:02.008 "get_zone_info": false, 00:12:02.008 "zone_management": false, 00:12:02.008 "zone_append": false, 00:12:02.008 "compare": false, 00:12:02.008 "compare_and_write": false, 00:12:02.008 "abort": true, 00:12:02.008 "seek_hole": false, 00:12:02.008 "seek_data": false, 00:12:02.008 "copy": true, 00:12:02.008 "nvme_iov_md": false 00:12:02.008 }, 00:12:02.008 "memory_domains": [ 00:12:02.008 { 00:12:02.008 "dma_device_id": "system", 00:12:02.008 "dma_device_type": 1 00:12:02.008 }, 00:12:02.008 { 00:12:02.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.008 "dma_device_type": 2 00:12:02.008 } 00:12:02.008 ], 00:12:02.008 "driver_specific": {} 00:12:02.008 } 00:12:02.008 ] 00:12:02.008 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:02.008 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:02.008 10:27:05 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:02.008 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:02.267 BaseBdev3 00:12:02.267 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:02.267 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:02.267 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:02.267 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:02.267 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:02.267 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:02.267 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:02.524 10:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:02.782 [ 00:12:02.782 { 00:12:02.782 "name": "BaseBdev3", 00:12:02.782 "aliases": [ 00:12:02.782 "6cf72c8e-c7db-4a56-bbf1-0ce18be8a0cf" 00:12:02.782 ], 00:12:02.782 "product_name": "Malloc disk", 00:12:02.782 "block_size": 512, 00:12:02.782 "num_blocks": 65536, 00:12:02.782 "uuid": "6cf72c8e-c7db-4a56-bbf1-0ce18be8a0cf", 00:12:02.782 "assigned_rate_limits": { 00:12:02.782 "rw_ios_per_sec": 0, 00:12:02.782 "rw_mbytes_per_sec": 0, 00:12:02.782 "r_mbytes_per_sec": 0, 00:12:02.782 "w_mbytes_per_sec": 0 00:12:02.782 }, 00:12:02.782 "claimed": false, 00:12:02.782 "zoned": false, 00:12:02.782 
"supported_io_types": { 00:12:02.782 "read": true, 00:12:02.782 "write": true, 00:12:02.782 "unmap": true, 00:12:02.782 "flush": true, 00:12:02.782 "reset": true, 00:12:02.782 "nvme_admin": false, 00:12:02.782 "nvme_io": false, 00:12:02.782 "nvme_io_md": false, 00:12:02.782 "write_zeroes": true, 00:12:02.782 "zcopy": true, 00:12:02.782 "get_zone_info": false, 00:12:02.782 "zone_management": false, 00:12:02.782 "zone_append": false, 00:12:02.782 "compare": false, 00:12:02.782 "compare_and_write": false, 00:12:02.782 "abort": true, 00:12:02.782 "seek_hole": false, 00:12:02.782 "seek_data": false, 00:12:02.782 "copy": true, 00:12:02.782 "nvme_iov_md": false 00:12:02.782 }, 00:12:02.782 "memory_domains": [ 00:12:02.782 { 00:12:02.782 "dma_device_id": "system", 00:12:02.782 "dma_device_type": 1 00:12:02.782 }, 00:12:02.782 { 00:12:02.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.782 "dma_device_type": 2 00:12:02.782 } 00:12:02.782 ], 00:12:02.782 "driver_specific": {} 00:12:02.782 } 00:12:02.782 ] 00:12:02.782 10:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:02.782 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:02.782 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:02.782 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:03.040 [2024-07-25 10:27:06.533796] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.040 [2024-07-25 10:27:06.533838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.040 [2024-07-25 10:27:06.533878] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.040 
[2024-07-25 10:27:06.535175] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.040 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:03.040 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:03.040 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:03.040 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:03.040 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:03.040 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:03.040 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:03.040 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:03.040 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:03.040 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:03.040 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.040 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.298 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:03.298 "name": "Existed_Raid", 00:12:03.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.298 "strip_size_kb": 64, 00:12:03.298 "state": "configuring", 00:12:03.298 "raid_level": "raid0", 00:12:03.298 "superblock": false, 00:12:03.298 "num_base_bdevs": 3, 00:12:03.298 
"num_base_bdevs_discovered": 2, 00:12:03.298 "num_base_bdevs_operational": 3, 00:12:03.298 "base_bdevs_list": [ 00:12:03.298 { 00:12:03.298 "name": "BaseBdev1", 00:12:03.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.298 "is_configured": false, 00:12:03.298 "data_offset": 0, 00:12:03.298 "data_size": 0 00:12:03.298 }, 00:12:03.298 { 00:12:03.298 "name": "BaseBdev2", 00:12:03.298 "uuid": "7c11f82a-a49c-43b8-a10e-de06c05b2798", 00:12:03.298 "is_configured": true, 00:12:03.298 "data_offset": 0, 00:12:03.298 "data_size": 65536 00:12:03.298 }, 00:12:03.298 { 00:12:03.298 "name": "BaseBdev3", 00:12:03.298 "uuid": "6cf72c8e-c7db-4a56-bbf1-0ce18be8a0cf", 00:12:03.298 "is_configured": true, 00:12:03.298 "data_offset": 0, 00:12:03.298 "data_size": 65536 00:12:03.298 } 00:12:03.298 ] 00:12:03.298 }' 00:12:03.298 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:03.298 10:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.864 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:04.122 [2024-07-25 10:27:07.580561] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:04.122 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:04.122 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:04.122 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:04.122 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:04.122 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:04.122 10:27:07 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:04.122 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:04.122 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:04.122 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:04.122 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:04.122 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.122 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.380 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:04.380 "name": "Existed_Raid", 00:12:04.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.380 "strip_size_kb": 64, 00:12:04.380 "state": "configuring", 00:12:04.380 "raid_level": "raid0", 00:12:04.380 "superblock": false, 00:12:04.380 "num_base_bdevs": 3, 00:12:04.380 "num_base_bdevs_discovered": 1, 00:12:04.380 "num_base_bdevs_operational": 3, 00:12:04.380 "base_bdevs_list": [ 00:12:04.380 { 00:12:04.380 "name": "BaseBdev1", 00:12:04.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.380 "is_configured": false, 00:12:04.380 "data_offset": 0, 00:12:04.380 "data_size": 0 00:12:04.380 }, 00:12:04.380 { 00:12:04.380 "name": null, 00:12:04.380 "uuid": "7c11f82a-a49c-43b8-a10e-de06c05b2798", 00:12:04.380 "is_configured": false, 00:12:04.380 "data_offset": 0, 00:12:04.380 "data_size": 65536 00:12:04.380 }, 00:12:04.380 { 00:12:04.380 "name": "BaseBdev3", 00:12:04.380 "uuid": "6cf72c8e-c7db-4a56-bbf1-0ce18be8a0cf", 00:12:04.380 "is_configured": true, 00:12:04.380 "data_offset": 0, 00:12:04.380 "data_size": 65536 00:12:04.380 } 
00:12:04.380 ] 00:12:04.380 }' 00:12:04.380 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:04.380 10:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.973 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.973 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:04.973 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:04.973 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:05.231 [2024-07-25 10:27:08.918374] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.231 BaseBdev1 00:12:05.231 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:05.231 10:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:05.231 10:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:05.231 10:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:05.231 10:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:05.231 10:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:05.231 10:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:05.489 10:27:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:05.747 [ 00:12:05.747 { 00:12:05.747 "name": "BaseBdev1", 00:12:05.747 "aliases": [ 00:12:05.747 "3f29a6e0-567e-4aa0-9002-664239b22f08" 00:12:05.747 ], 00:12:05.747 "product_name": "Malloc disk", 00:12:05.747 "block_size": 512, 00:12:05.747 "num_blocks": 65536, 00:12:05.747 "uuid": "3f29a6e0-567e-4aa0-9002-664239b22f08", 00:12:05.747 "assigned_rate_limits": { 00:12:05.747 "rw_ios_per_sec": 0, 00:12:05.747 "rw_mbytes_per_sec": 0, 00:12:05.747 "r_mbytes_per_sec": 0, 00:12:05.747 "w_mbytes_per_sec": 0 00:12:05.747 }, 00:12:05.747 "claimed": true, 00:12:05.747 "claim_type": "exclusive_write", 00:12:05.747 "zoned": false, 00:12:05.747 "supported_io_types": { 00:12:05.747 "read": true, 00:12:05.747 "write": true, 00:12:05.747 "unmap": true, 00:12:05.747 "flush": true, 00:12:05.747 "reset": true, 00:12:05.747 "nvme_admin": false, 00:12:05.747 "nvme_io": false, 00:12:05.747 "nvme_io_md": false, 00:12:05.747 "write_zeroes": true, 00:12:05.747 "zcopy": true, 00:12:05.747 "get_zone_info": false, 00:12:05.747 "zone_management": false, 00:12:05.747 "zone_append": false, 00:12:05.747 "compare": false, 00:12:05.747 "compare_and_write": false, 00:12:05.747 "abort": true, 00:12:05.747 "seek_hole": false, 00:12:05.747 "seek_data": false, 00:12:05.747 "copy": true, 00:12:05.747 "nvme_iov_md": false 00:12:05.747 }, 00:12:05.747 "memory_domains": [ 00:12:05.747 { 00:12:05.747 "dma_device_id": "system", 00:12:05.747 "dma_device_type": 1 00:12:05.747 }, 00:12:05.747 { 00:12:05.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.747 "dma_device_type": 2 00:12:05.747 } 00:12:05.747 ], 00:12:05.747 "driver_specific": {} 00:12:05.747 } 00:12:05.747 ] 00:12:05.747 10:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:05.747 10:27:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:05.747 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:05.747 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:05.747 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:05.747 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:05.747 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:05.747 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:05.747 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:05.747 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:05.747 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:05.747 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.747 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.314 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:06.314 "name": "Existed_Raid", 00:12:06.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.314 "strip_size_kb": 64, 00:12:06.314 "state": "configuring", 00:12:06.314 "raid_level": "raid0", 00:12:06.314 "superblock": false, 00:12:06.314 "num_base_bdevs": 3, 00:12:06.314 "num_base_bdevs_discovered": 2, 00:12:06.314 "num_base_bdevs_operational": 3, 00:12:06.314 "base_bdevs_list": [ 00:12:06.314 { 00:12:06.314 "name": "BaseBdev1", 00:12:06.314 
"uuid": "3f29a6e0-567e-4aa0-9002-664239b22f08", 00:12:06.314 "is_configured": true, 00:12:06.314 "data_offset": 0, 00:12:06.314 "data_size": 65536 00:12:06.314 }, 00:12:06.314 { 00:12:06.314 "name": null, 00:12:06.314 "uuid": "7c11f82a-a49c-43b8-a10e-de06c05b2798", 00:12:06.314 "is_configured": false, 00:12:06.314 "data_offset": 0, 00:12:06.314 "data_size": 65536 00:12:06.314 }, 00:12:06.314 { 00:12:06.314 "name": "BaseBdev3", 00:12:06.314 "uuid": "6cf72c8e-c7db-4a56-bbf1-0ce18be8a0cf", 00:12:06.314 "is_configured": true, 00:12:06.314 "data_offset": 0, 00:12:06.314 "data_size": 65536 00:12:06.314 } 00:12:06.314 ] 00:12:06.314 }' 00:12:06.314 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:06.314 10:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.880 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.880 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:06.880 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:06.880 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:07.138 [2024-07-25 10:27:10.751277] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:07.138 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:07.138 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:07.138 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
00:12:07.138 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:07.138 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:07.138 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:07.138 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:07.138 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:07.138 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:07.138 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:07.138 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.138 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.396 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:07.396 "name": "Existed_Raid", 00:12:07.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.396 "strip_size_kb": 64, 00:12:07.396 "state": "configuring", 00:12:07.396 "raid_level": "raid0", 00:12:07.396 "superblock": false, 00:12:07.396 "num_base_bdevs": 3, 00:12:07.396 "num_base_bdevs_discovered": 1, 00:12:07.396 "num_base_bdevs_operational": 3, 00:12:07.396 "base_bdevs_list": [ 00:12:07.396 { 00:12:07.396 "name": "BaseBdev1", 00:12:07.396 "uuid": "3f29a6e0-567e-4aa0-9002-664239b22f08", 00:12:07.396 "is_configured": true, 00:12:07.396 "data_offset": 0, 00:12:07.396 "data_size": 65536 00:12:07.396 }, 00:12:07.396 { 00:12:07.396 "name": null, 00:12:07.396 "uuid": "7c11f82a-a49c-43b8-a10e-de06c05b2798", 00:12:07.396 "is_configured": false, 00:12:07.396 
"data_offset": 0, 00:12:07.396 "data_size": 65536 00:12:07.396 }, 00:12:07.396 { 00:12:07.396 "name": null, 00:12:07.396 "uuid": "6cf72c8e-c7db-4a56-bbf1-0ce18be8a0cf", 00:12:07.396 "is_configured": false, 00:12:07.396 "data_offset": 0, 00:12:07.396 "data_size": 65536 00:12:07.396 } 00:12:07.396 ] 00:12:07.396 }' 00:12:07.396 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:07.396 10:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.962 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.962 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.220 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:08.220 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:08.478 [2024-07-25 10:27:12.030718] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.478 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:08.478 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:08.478 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:08.478 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:08.478 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:08.478 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- 
# local num_base_bdevs_operational=3 00:12:08.478 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:08.478 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:08.478 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:08.478 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:08.478 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.478 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.735 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:08.735 "name": "Existed_Raid", 00:12:08.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.735 "strip_size_kb": 64, 00:12:08.735 "state": "configuring", 00:12:08.735 "raid_level": "raid0", 00:12:08.735 "superblock": false, 00:12:08.735 "num_base_bdevs": 3, 00:12:08.735 "num_base_bdevs_discovered": 2, 00:12:08.735 "num_base_bdevs_operational": 3, 00:12:08.735 "base_bdevs_list": [ 00:12:08.735 { 00:12:08.735 "name": "BaseBdev1", 00:12:08.735 "uuid": "3f29a6e0-567e-4aa0-9002-664239b22f08", 00:12:08.735 "is_configured": true, 00:12:08.735 "data_offset": 0, 00:12:08.735 "data_size": 65536 00:12:08.735 }, 00:12:08.735 { 00:12:08.735 "name": null, 00:12:08.735 "uuid": "7c11f82a-a49c-43b8-a10e-de06c05b2798", 00:12:08.735 "is_configured": false, 00:12:08.735 "data_offset": 0, 00:12:08.735 "data_size": 65536 00:12:08.735 }, 00:12:08.735 { 00:12:08.735 "name": "BaseBdev3", 00:12:08.735 "uuid": "6cf72c8e-c7db-4a56-bbf1-0ce18be8a0cf", 00:12:08.735 "is_configured": true, 00:12:08.735 "data_offset": 0, 00:12:08.735 "data_size": 65536 00:12:08.735 } 00:12:08.735 ] 
00:12:08.735 }' 00:12:08.735 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:08.735 10:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.298 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.298 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:09.555 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:09.555 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:09.815 [2024-07-25 10:27:13.310158] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:09.815 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:09.815 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:09.815 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:09.815 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:09.815 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:09.815 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:09.815 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:09.815 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:09.815 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- 
# local num_base_bdevs_discovered 00:12:09.815 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:09.815 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.815 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.074 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:10.075 "name": "Existed_Raid", 00:12:10.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.075 "strip_size_kb": 64, 00:12:10.075 "state": "configuring", 00:12:10.075 "raid_level": "raid0", 00:12:10.075 "superblock": false, 00:12:10.075 "num_base_bdevs": 3, 00:12:10.075 "num_base_bdevs_discovered": 1, 00:12:10.075 "num_base_bdevs_operational": 3, 00:12:10.075 "base_bdevs_list": [ 00:12:10.075 { 00:12:10.075 "name": null, 00:12:10.075 "uuid": "3f29a6e0-567e-4aa0-9002-664239b22f08", 00:12:10.075 "is_configured": false, 00:12:10.075 "data_offset": 0, 00:12:10.075 "data_size": 65536 00:12:10.075 }, 00:12:10.075 { 00:12:10.075 "name": null, 00:12:10.075 "uuid": "7c11f82a-a49c-43b8-a10e-de06c05b2798", 00:12:10.075 "is_configured": false, 00:12:10.075 "data_offset": 0, 00:12:10.075 "data_size": 65536 00:12:10.075 }, 00:12:10.075 { 00:12:10.075 "name": "BaseBdev3", 00:12:10.075 "uuid": "6cf72c8e-c7db-4a56-bbf1-0ce18be8a0cf", 00:12:10.075 "is_configured": true, 00:12:10.075 "data_offset": 0, 00:12:10.075 "data_size": 65536 00:12:10.075 } 00:12:10.075 ] 00:12:10.075 }' 00:12:10.075 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:10.075 10:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.640 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.640 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:10.899 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:10.899 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:11.157 [2024-07-25 10:27:14.633907] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.157 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:11.157 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:11.157 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:11.157 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:11.157 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:11.157 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:11.157 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:11.157 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:11.157 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:11.157 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:11.157 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.157 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.415 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:11.415 "name": "Existed_Raid", 00:12:11.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.415 "strip_size_kb": 64, 00:12:11.415 "state": "configuring", 00:12:11.415 "raid_level": "raid0", 00:12:11.415 "superblock": false, 00:12:11.415 "num_base_bdevs": 3, 00:12:11.415 "num_base_bdevs_discovered": 2, 00:12:11.415 "num_base_bdevs_operational": 3, 00:12:11.415 "base_bdevs_list": [ 00:12:11.415 { 00:12:11.415 "name": null, 00:12:11.415 "uuid": "3f29a6e0-567e-4aa0-9002-664239b22f08", 00:12:11.415 "is_configured": false, 00:12:11.415 "data_offset": 0, 00:12:11.415 "data_size": 65536 00:12:11.415 }, 00:12:11.415 { 00:12:11.415 "name": "BaseBdev2", 00:12:11.415 "uuid": "7c11f82a-a49c-43b8-a10e-de06c05b2798", 00:12:11.415 "is_configured": true, 00:12:11.415 "data_offset": 0, 00:12:11.415 "data_size": 65536 00:12:11.415 }, 00:12:11.415 { 00:12:11.415 "name": "BaseBdev3", 00:12:11.415 "uuid": "6cf72c8e-c7db-4a56-bbf1-0ce18be8a0cf", 00:12:11.415 "is_configured": true, 00:12:11.415 "data_offset": 0, 00:12:11.415 "data_size": 65536 00:12:11.415 } 00:12:11.415 ] 00:12:11.415 }' 00:12:11.415 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:11.415 10:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.982 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.982 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:12.240 
10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:12.240 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.240 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:12.240 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 3f29a6e0-567e-4aa0-9002-664239b22f08 00:12:12.499 [2024-07-25 10:27:16.178862] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:12.499 [2024-07-25 10:27:16.178907] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x9a8d00 00:12:12.499 [2024-07-25 10:27:16.178915] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:12.499 [2024-07-25 10:27:16.179072] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9a7ca0 00:12:12.499 [2024-07-25 10:27:16.179210] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9a8d00 00:12:12.499 [2024-07-25 10:27:16.179223] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x9a8d00 00:12:12.499 [2024-07-25 10:27:16.179414] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.499 NewBaseBdev 00:12:12.499 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:12.499 10:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:12.499 10:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:12.499 10:27:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local i 00:12:12.499 10:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:12.499 10:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:12.499 10:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:12.758 10:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:13.022 [ 00:12:13.022 { 00:12:13.022 "name": "NewBaseBdev", 00:12:13.023 "aliases": [ 00:12:13.023 "3f29a6e0-567e-4aa0-9002-664239b22f08" 00:12:13.023 ], 00:12:13.023 "product_name": "Malloc disk", 00:12:13.023 "block_size": 512, 00:12:13.023 "num_blocks": 65536, 00:12:13.023 "uuid": "3f29a6e0-567e-4aa0-9002-664239b22f08", 00:12:13.023 "assigned_rate_limits": { 00:12:13.023 "rw_ios_per_sec": 0, 00:12:13.023 "rw_mbytes_per_sec": 0, 00:12:13.023 "r_mbytes_per_sec": 0, 00:12:13.023 "w_mbytes_per_sec": 0 00:12:13.023 }, 00:12:13.023 "claimed": true, 00:12:13.023 "claim_type": "exclusive_write", 00:12:13.023 "zoned": false, 00:12:13.023 "supported_io_types": { 00:12:13.023 "read": true, 00:12:13.023 "write": true, 00:12:13.023 "unmap": true, 00:12:13.023 "flush": true, 00:12:13.023 "reset": true, 00:12:13.023 "nvme_admin": false, 00:12:13.023 "nvme_io": false, 00:12:13.023 "nvme_io_md": false, 00:12:13.023 "write_zeroes": true, 00:12:13.023 "zcopy": true, 00:12:13.023 "get_zone_info": false, 00:12:13.023 "zone_management": false, 00:12:13.023 "zone_append": false, 00:12:13.023 "compare": false, 00:12:13.023 "compare_and_write": false, 00:12:13.023 "abort": true, 00:12:13.023 "seek_hole": false, 00:12:13.023 "seek_data": false, 00:12:13.023 "copy": true, 00:12:13.023 "nvme_iov_md": 
false 00:12:13.023 }, 00:12:13.023 "memory_domains": [ 00:12:13.023 { 00:12:13.023 "dma_device_id": "system", 00:12:13.023 "dma_device_type": 1 00:12:13.023 }, 00:12:13.023 { 00:12:13.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.023 "dma_device_type": 2 00:12:13.023 } 00:12:13.023 ], 00:12:13.023 "driver_specific": {} 00:12:13.023 } 00:12:13.023 ] 00:12:13.023 10:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:13.023 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:13.023 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:13.023 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:13.023 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:13.023 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:13.023 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:13.023 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:13.023 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:13.023 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:13.023 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:13.023 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.023 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.282 10:27:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:13.282 "name": "Existed_Raid", 00:12:13.282 "uuid": "ef70b5d7-86e2-4cb5-b05d-b271118237f5", 00:12:13.282 "strip_size_kb": 64, 00:12:13.282 "state": "online", 00:12:13.282 "raid_level": "raid0", 00:12:13.282 "superblock": false, 00:12:13.282 "num_base_bdevs": 3, 00:12:13.282 "num_base_bdevs_discovered": 3, 00:12:13.282 "num_base_bdevs_operational": 3, 00:12:13.282 "base_bdevs_list": [ 00:12:13.282 { 00:12:13.282 "name": "NewBaseBdev", 00:12:13.282 "uuid": "3f29a6e0-567e-4aa0-9002-664239b22f08", 00:12:13.282 "is_configured": true, 00:12:13.282 "data_offset": 0, 00:12:13.282 "data_size": 65536 00:12:13.282 }, 00:12:13.282 { 00:12:13.282 "name": "BaseBdev2", 00:12:13.282 "uuid": "7c11f82a-a49c-43b8-a10e-de06c05b2798", 00:12:13.282 "is_configured": true, 00:12:13.282 "data_offset": 0, 00:12:13.282 "data_size": 65536 00:12:13.282 }, 00:12:13.282 { 00:12:13.282 "name": "BaseBdev3", 00:12:13.282 "uuid": "6cf72c8e-c7db-4a56-bbf1-0ce18be8a0cf", 00:12:13.282 "is_configured": true, 00:12:13.283 "data_offset": 0, 00:12:13.283 "data_size": 65536 00:12:13.283 } 00:12:13.283 ] 00:12:13.283 }' 00:12:13.283 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:13.283 10:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.848 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:13.848 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:13.848 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:13.848 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:13.848 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:13.848 10:27:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:13.848 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:13.848 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:14.107 [2024-07-25 10:27:17.634994] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.107 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:14.107 "name": "Existed_Raid", 00:12:14.107 "aliases": [ 00:12:14.107 "ef70b5d7-86e2-4cb5-b05d-b271118237f5" 00:12:14.107 ], 00:12:14.107 "product_name": "Raid Volume", 00:12:14.107 "block_size": 512, 00:12:14.107 "num_blocks": 196608, 00:12:14.107 "uuid": "ef70b5d7-86e2-4cb5-b05d-b271118237f5", 00:12:14.107 "assigned_rate_limits": { 00:12:14.107 "rw_ios_per_sec": 0, 00:12:14.107 "rw_mbytes_per_sec": 0, 00:12:14.107 "r_mbytes_per_sec": 0, 00:12:14.108 "w_mbytes_per_sec": 0 00:12:14.108 }, 00:12:14.108 "claimed": false, 00:12:14.108 "zoned": false, 00:12:14.108 "supported_io_types": { 00:12:14.108 "read": true, 00:12:14.108 "write": true, 00:12:14.108 "unmap": true, 00:12:14.108 "flush": true, 00:12:14.108 "reset": true, 00:12:14.108 "nvme_admin": false, 00:12:14.108 "nvme_io": false, 00:12:14.108 "nvme_io_md": false, 00:12:14.108 "write_zeroes": true, 00:12:14.108 "zcopy": false, 00:12:14.108 "get_zone_info": false, 00:12:14.108 "zone_management": false, 00:12:14.108 "zone_append": false, 00:12:14.108 "compare": false, 00:12:14.108 "compare_and_write": false, 00:12:14.108 "abort": false, 00:12:14.108 "seek_hole": false, 00:12:14.108 "seek_data": false, 00:12:14.108 "copy": false, 00:12:14.108 "nvme_iov_md": false 00:12:14.108 }, 00:12:14.108 "memory_domains": [ 00:12:14.108 { 00:12:14.108 "dma_device_id": "system", 00:12:14.108 "dma_device_type": 1 00:12:14.108 }, 
00:12:14.108 { 00:12:14.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.108 "dma_device_type": 2 00:12:14.108 }, 00:12:14.108 { 00:12:14.108 "dma_device_id": "system", 00:12:14.108 "dma_device_type": 1 00:12:14.108 }, 00:12:14.108 { 00:12:14.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.108 "dma_device_type": 2 00:12:14.108 }, 00:12:14.108 { 00:12:14.108 "dma_device_id": "system", 00:12:14.108 "dma_device_type": 1 00:12:14.108 }, 00:12:14.108 { 00:12:14.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.108 "dma_device_type": 2 00:12:14.108 } 00:12:14.108 ], 00:12:14.108 "driver_specific": { 00:12:14.108 "raid": { 00:12:14.108 "uuid": "ef70b5d7-86e2-4cb5-b05d-b271118237f5", 00:12:14.108 "strip_size_kb": 64, 00:12:14.108 "state": "online", 00:12:14.108 "raid_level": "raid0", 00:12:14.108 "superblock": false, 00:12:14.108 "num_base_bdevs": 3, 00:12:14.108 "num_base_bdevs_discovered": 3, 00:12:14.108 "num_base_bdevs_operational": 3, 00:12:14.108 "base_bdevs_list": [ 00:12:14.108 { 00:12:14.108 "name": "NewBaseBdev", 00:12:14.108 "uuid": "3f29a6e0-567e-4aa0-9002-664239b22f08", 00:12:14.108 "is_configured": true, 00:12:14.108 "data_offset": 0, 00:12:14.108 "data_size": 65536 00:12:14.108 }, 00:12:14.108 { 00:12:14.108 "name": "BaseBdev2", 00:12:14.108 "uuid": "7c11f82a-a49c-43b8-a10e-de06c05b2798", 00:12:14.108 "is_configured": true, 00:12:14.108 "data_offset": 0, 00:12:14.108 "data_size": 65536 00:12:14.108 }, 00:12:14.108 { 00:12:14.108 "name": "BaseBdev3", 00:12:14.108 "uuid": "6cf72c8e-c7db-4a56-bbf1-0ce18be8a0cf", 00:12:14.108 "is_configured": true, 00:12:14.108 "data_offset": 0, 00:12:14.108 "data_size": 65536 00:12:14.108 } 00:12:14.108 ] 00:12:14.108 } 00:12:14.108 } 00:12:14.108 }' 00:12:14.108 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.108 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='NewBaseBdev 00:12:14.108 BaseBdev2 00:12:14.108 BaseBdev3' 00:12:14.108 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:14.108 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:14.108 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:14.367 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:14.367 "name": "NewBaseBdev", 00:12:14.367 "aliases": [ 00:12:14.367 "3f29a6e0-567e-4aa0-9002-664239b22f08" 00:12:14.367 ], 00:12:14.367 "product_name": "Malloc disk", 00:12:14.367 "block_size": 512, 00:12:14.367 "num_blocks": 65536, 00:12:14.367 "uuid": "3f29a6e0-567e-4aa0-9002-664239b22f08", 00:12:14.367 "assigned_rate_limits": { 00:12:14.367 "rw_ios_per_sec": 0, 00:12:14.367 "rw_mbytes_per_sec": 0, 00:12:14.367 "r_mbytes_per_sec": 0, 00:12:14.367 "w_mbytes_per_sec": 0 00:12:14.367 }, 00:12:14.367 "claimed": true, 00:12:14.367 "claim_type": "exclusive_write", 00:12:14.367 "zoned": false, 00:12:14.367 "supported_io_types": { 00:12:14.367 "read": true, 00:12:14.367 "write": true, 00:12:14.367 "unmap": true, 00:12:14.367 "flush": true, 00:12:14.367 "reset": true, 00:12:14.367 "nvme_admin": false, 00:12:14.367 "nvme_io": false, 00:12:14.367 "nvme_io_md": false, 00:12:14.367 "write_zeroes": true, 00:12:14.367 "zcopy": true, 00:12:14.367 "get_zone_info": false, 00:12:14.367 "zone_management": false, 00:12:14.367 "zone_append": false, 00:12:14.367 "compare": false, 00:12:14.367 "compare_and_write": false, 00:12:14.367 "abort": true, 00:12:14.367 "seek_hole": false, 00:12:14.367 "seek_data": false, 00:12:14.367 "copy": true, 00:12:14.367 "nvme_iov_md": false 00:12:14.367 }, 00:12:14.367 "memory_domains": [ 00:12:14.367 { 00:12:14.367 "dma_device_id": "system", 00:12:14.367 
"dma_device_type": 1 00:12:14.367 }, 00:12:14.367 { 00:12:14.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.367 "dma_device_type": 2 00:12:14.367 } 00:12:14.367 ], 00:12:14.367 "driver_specific": {} 00:12:14.367 }' 00:12:14.367 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.367 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.367 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:14.367 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.367 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.367 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:14.367 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.626 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.626 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:14.626 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.626 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.626 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:14.626 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:14.626 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:14.626 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:14.885 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:14.885 "name": 
"BaseBdev2", 00:12:14.885 "aliases": [ 00:12:14.885 "7c11f82a-a49c-43b8-a10e-de06c05b2798" 00:12:14.885 ], 00:12:14.885 "product_name": "Malloc disk", 00:12:14.885 "block_size": 512, 00:12:14.885 "num_blocks": 65536, 00:12:14.885 "uuid": "7c11f82a-a49c-43b8-a10e-de06c05b2798", 00:12:14.885 "assigned_rate_limits": { 00:12:14.885 "rw_ios_per_sec": 0, 00:12:14.885 "rw_mbytes_per_sec": 0, 00:12:14.885 "r_mbytes_per_sec": 0, 00:12:14.885 "w_mbytes_per_sec": 0 00:12:14.885 }, 00:12:14.885 "claimed": true, 00:12:14.885 "claim_type": "exclusive_write", 00:12:14.885 "zoned": false, 00:12:14.885 "supported_io_types": { 00:12:14.885 "read": true, 00:12:14.885 "write": true, 00:12:14.885 "unmap": true, 00:12:14.885 "flush": true, 00:12:14.885 "reset": true, 00:12:14.885 "nvme_admin": false, 00:12:14.885 "nvme_io": false, 00:12:14.885 "nvme_io_md": false, 00:12:14.885 "write_zeroes": true, 00:12:14.885 "zcopy": true, 00:12:14.885 "get_zone_info": false, 00:12:14.885 "zone_management": false, 00:12:14.885 "zone_append": false, 00:12:14.885 "compare": false, 00:12:14.885 "compare_and_write": false, 00:12:14.885 "abort": true, 00:12:14.885 "seek_hole": false, 00:12:14.885 "seek_data": false, 00:12:14.885 "copy": true, 00:12:14.885 "nvme_iov_md": false 00:12:14.885 }, 00:12:14.885 "memory_domains": [ 00:12:14.885 { 00:12:14.885 "dma_device_id": "system", 00:12:14.885 "dma_device_type": 1 00:12:14.885 }, 00:12:14.885 { 00:12:14.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.885 "dma_device_type": 2 00:12:14.885 } 00:12:14.885 ], 00:12:14.885 "driver_specific": {} 00:12:14.885 }' 00:12:14.885 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.885 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.885 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:14.885 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:12:14.885 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.143 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:15.143 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.143 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.143 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:15.143 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.143 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.143 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:15.143 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:15.143 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:15.143 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:15.401 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:15.401 "name": "BaseBdev3", 00:12:15.401 "aliases": [ 00:12:15.401 "6cf72c8e-c7db-4a56-bbf1-0ce18be8a0cf" 00:12:15.401 ], 00:12:15.401 "product_name": "Malloc disk", 00:12:15.401 "block_size": 512, 00:12:15.401 "num_blocks": 65536, 00:12:15.401 "uuid": "6cf72c8e-c7db-4a56-bbf1-0ce18be8a0cf", 00:12:15.401 "assigned_rate_limits": { 00:12:15.401 "rw_ios_per_sec": 0, 00:12:15.401 "rw_mbytes_per_sec": 0, 00:12:15.401 "r_mbytes_per_sec": 0, 00:12:15.401 "w_mbytes_per_sec": 0 00:12:15.401 }, 00:12:15.401 "claimed": true, 00:12:15.401 "claim_type": "exclusive_write", 00:12:15.401 "zoned": false, 00:12:15.401 "supported_io_types": { 
00:12:15.401 "read": true, 00:12:15.401 "write": true, 00:12:15.401 "unmap": true, 00:12:15.401 "flush": true, 00:12:15.401 "reset": true, 00:12:15.401 "nvme_admin": false, 00:12:15.401 "nvme_io": false, 00:12:15.401 "nvme_io_md": false, 00:12:15.401 "write_zeroes": true, 00:12:15.401 "zcopy": true, 00:12:15.401 "get_zone_info": false, 00:12:15.401 "zone_management": false, 00:12:15.401 "zone_append": false, 00:12:15.401 "compare": false, 00:12:15.401 "compare_and_write": false, 00:12:15.401 "abort": true, 00:12:15.401 "seek_hole": false, 00:12:15.401 "seek_data": false, 00:12:15.401 "copy": true, 00:12:15.401 "nvme_iov_md": false 00:12:15.401 }, 00:12:15.401 "memory_domains": [ 00:12:15.401 { 00:12:15.401 "dma_device_id": "system", 00:12:15.401 "dma_device_type": 1 00:12:15.401 }, 00:12:15.401 { 00:12:15.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.401 "dma_device_type": 2 00:12:15.401 } 00:12:15.401 ], 00:12:15.401 "driver_specific": {} 00:12:15.401 }' 00:12:15.401 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.401 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.401 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:15.401 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.401 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.659 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:15.659 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.659 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.659 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:15.659 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:12:15.659 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.659 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:15.659 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:15.917 [2024-07-25 10:27:19.535810] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:15.917 [2024-07-25 10:27:19.535837] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.917 [2024-07-25 10:27:19.535916] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.917 [2024-07-25 10:27:19.535983] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.918 [2024-07-25 10:27:19.535998] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a8d00 name Existed_Raid, state offline 00:12:15.918 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2351087 00:12:15.918 10:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2351087 ']' 00:12:15.918 10:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2351087 00:12:15.918 10:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:15.918 10:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:15.918 10:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2351087 00:12:15.918 10:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:15.918 10:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 
= sudo ']' 00:12:15.918 10:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2351087' 00:12:15.918 killing process with pid 2351087 00:12:15.918 10:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2351087 00:12:15.918 [2024-07-25 10:27:19.588008] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.918 10:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2351087 00:12:15.918 [2024-07-25 10:27:19.625061] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:16.483 00:12:16.483 real 0m27.103s 00:12:16.483 user 0m50.808s 00:12:16.483 sys 0m3.804s 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.483 ************************************ 00:12:16.483 END TEST raid_state_function_test 00:12:16.483 ************************************ 00:12:16.483 10:27:19 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:12:16.483 10:27:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:16.483 10:27:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.483 10:27:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.483 ************************************ 00:12:16.483 START TEST raid_state_function_test_sb 00:12:16.483 ************************************ 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:12:16.483 10:27:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:16.483 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:16.484 10:27:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2355508 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2355508' 00:12:16.484 Process raid pid: 2355508 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2355508 /var/tmp/spdk-raid.sock 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2355508 ']' 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:12:16.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.484 10:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.484 [2024-07-25 10:27:20.007797] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:12:16.484 [2024-07-25 10:27:20.007903] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.484 [2024-07-25 10:27:20.090352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.742 [2024-07-25 10:27:20.204138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.742 [2024-07-25 10:27:20.277026] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.742 [2024-07-25 10:27:20.277072] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.317 10:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:17.317 10:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:17.317 10:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:17.575 [2024-07-25 10:27:21.258207] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.575 [2024-07-25 10:27:21.258253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.575 [2024-07-25 10:27:21.258266] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:12:17.575 [2024-07-25 10:27:21.258280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.575 [2024-07-25 10:27:21.258289] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:17.575 [2024-07-25 10:27:21.258302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:17.575 10:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:17.575 10:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:17.575 10:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:17.575 10:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:17.575 10:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:17.575 10:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:17.575 10:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:17.575 10:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:17.575 10:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:17.575 10:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:17.575 10:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.575 10:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.833 10:27:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:17.833 "name": "Existed_Raid", 00:12:17.833 "uuid": "b8a2d80f-01c5-4f02-b153-535cf076bb7a", 00:12:17.833 "strip_size_kb": 64, 00:12:17.833 "state": "configuring", 00:12:17.833 "raid_level": "raid0", 00:12:17.833 "superblock": true, 00:12:17.833 "num_base_bdevs": 3, 00:12:17.833 "num_base_bdevs_discovered": 0, 00:12:17.833 "num_base_bdevs_operational": 3, 00:12:17.833 "base_bdevs_list": [ 00:12:17.833 { 00:12:17.833 "name": "BaseBdev1", 00:12:17.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.833 "is_configured": false, 00:12:17.833 "data_offset": 0, 00:12:17.833 "data_size": 0 00:12:17.833 }, 00:12:17.833 { 00:12:17.833 "name": "BaseBdev2", 00:12:17.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.833 "is_configured": false, 00:12:17.833 "data_offset": 0, 00:12:17.833 "data_size": 0 00:12:17.833 }, 00:12:17.833 { 00:12:17.833 "name": "BaseBdev3", 00:12:17.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.833 "is_configured": false, 00:12:17.833 "data_offset": 0, 00:12:17.833 "data_size": 0 00:12:17.833 } 00:12:17.833 ] 00:12:17.833 }' 00:12:17.833 10:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:17.833 10:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.399 10:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:18.656 [2024-07-25 10:27:22.324870] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:18.656 [2024-07-25 10:27:22.324908] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10d6620 name Existed_Raid, state configuring 00:12:18.656 10:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:18.914 [2024-07-25 10:27:22.565548] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:18.914 [2024-07-25 10:27:22.565583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:18.914 [2024-07-25 10:27:22.565595] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:18.914 [2024-07-25 10:27:22.565609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:18.914 [2024-07-25 10:27:22.565618] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:18.914 [2024-07-25 10:27:22.565631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:18.914 10:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:19.172 [2024-07-25 10:27:22.829725] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.172 BaseBdev1 00:12:19.172 10:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:19.172 10:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:19.172 10:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:19.172 10:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:19.172 10:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:19.173 10:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:12:19.173 10:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:19.430 10:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:19.688 [ 00:12:19.688 { 00:12:19.688 "name": "BaseBdev1", 00:12:19.688 "aliases": [ 00:12:19.688 "0f8bb251-453e-46c5-9d5b-6f832a96fbac" 00:12:19.688 ], 00:12:19.688 "product_name": "Malloc disk", 00:12:19.688 "block_size": 512, 00:12:19.688 "num_blocks": 65536, 00:12:19.688 "uuid": "0f8bb251-453e-46c5-9d5b-6f832a96fbac", 00:12:19.688 "assigned_rate_limits": { 00:12:19.688 "rw_ios_per_sec": 0, 00:12:19.688 "rw_mbytes_per_sec": 0, 00:12:19.688 "r_mbytes_per_sec": 0, 00:12:19.688 "w_mbytes_per_sec": 0 00:12:19.688 }, 00:12:19.688 "claimed": true, 00:12:19.688 "claim_type": "exclusive_write", 00:12:19.688 "zoned": false, 00:12:19.688 "supported_io_types": { 00:12:19.688 "read": true, 00:12:19.688 "write": true, 00:12:19.688 "unmap": true, 00:12:19.688 "flush": true, 00:12:19.688 "reset": true, 00:12:19.688 "nvme_admin": false, 00:12:19.688 "nvme_io": false, 00:12:19.688 "nvme_io_md": false, 00:12:19.688 "write_zeroes": true, 00:12:19.688 "zcopy": true, 00:12:19.688 "get_zone_info": false, 00:12:19.688 "zone_management": false, 00:12:19.689 "zone_append": false, 00:12:19.689 "compare": false, 00:12:19.689 "compare_and_write": false, 00:12:19.689 "abort": true, 00:12:19.689 "seek_hole": false, 00:12:19.689 "seek_data": false, 00:12:19.689 "copy": true, 00:12:19.689 "nvme_iov_md": false 00:12:19.689 }, 00:12:19.689 "memory_domains": [ 00:12:19.689 { 00:12:19.689 "dma_device_id": "system", 00:12:19.689 "dma_device_type": 1 00:12:19.689 }, 00:12:19.689 { 00:12:19.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.689 
"dma_device_type": 2 00:12:19.689 } 00:12:19.689 ], 00:12:19.689 "driver_specific": {} 00:12:19.689 } 00:12:19.689 ] 00:12:19.689 10:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:19.689 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:19.689 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:19.689 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:19.689 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:19.689 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:19.689 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:19.689 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:19.689 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:19.689 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:19.689 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:19.689 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:19.689 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.946 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:19.946 "name": "Existed_Raid", 00:12:19.946 "uuid": "d823c698-48cb-4d68-a11a-5094e574e94b", 00:12:19.946 "strip_size_kb": 64, 
00:12:19.946 "state": "configuring", 00:12:19.946 "raid_level": "raid0", 00:12:19.946 "superblock": true, 00:12:19.946 "num_base_bdevs": 3, 00:12:19.946 "num_base_bdevs_discovered": 1, 00:12:19.946 "num_base_bdevs_operational": 3, 00:12:19.946 "base_bdevs_list": [ 00:12:19.946 { 00:12:19.946 "name": "BaseBdev1", 00:12:19.946 "uuid": "0f8bb251-453e-46c5-9d5b-6f832a96fbac", 00:12:19.946 "is_configured": true, 00:12:19.946 "data_offset": 2048, 00:12:19.946 "data_size": 63488 00:12:19.946 }, 00:12:19.946 { 00:12:19.946 "name": "BaseBdev2", 00:12:19.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.946 "is_configured": false, 00:12:19.946 "data_offset": 0, 00:12:19.946 "data_size": 0 00:12:19.946 }, 00:12:19.946 { 00:12:19.946 "name": "BaseBdev3", 00:12:19.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.946 "is_configured": false, 00:12:19.946 "data_offset": 0, 00:12:19.946 "data_size": 0 00:12:19.946 } 00:12:19.946 ] 00:12:19.946 }' 00:12:19.946 10:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:19.946 10:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.511 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:20.769 [2024-07-25 10:27:24.389832] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.769 [2024-07-25 10:27:24.389889] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10d5e50 name Existed_Raid, state configuring 00:12:20.769 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:21.027 [2024-07-25 10:27:24.634510] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.027 [2024-07-25 10:27:24.636003] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.027 [2024-07-25 10:27:24.636036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:21.027 [2024-07-25 10:27:24.636049] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:21.027 [2024-07-25 10:27:24.636063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- 
# local tmp
00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:21.027 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:21.285 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:12:21.285 "name": "Existed_Raid",
00:12:21.285 "uuid": "9eadd107-a64f-4f4d-b4ab-aa764a4cf627",
00:12:21.285 "strip_size_kb": 64,
00:12:21.285 "state": "configuring",
00:12:21.285 "raid_level": "raid0",
00:12:21.285 "superblock": true,
00:12:21.285 "num_base_bdevs": 3,
00:12:21.285 "num_base_bdevs_discovered": 1,
00:12:21.285 "num_base_bdevs_operational": 3,
00:12:21.285 "base_bdevs_list": [
00:12:21.285 {
00:12:21.285 "name": "BaseBdev1",
00:12:21.286 "uuid": "0f8bb251-453e-46c5-9d5b-6f832a96fbac",
00:12:21.286 "is_configured": true,
00:12:21.286 "data_offset": 2048,
00:12:21.286 "data_size": 63488
00:12:21.286 },
00:12:21.286 {
00:12:21.286 "name": "BaseBdev2",
00:12:21.286 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:21.286 "is_configured": false,
00:12:21.286 "data_offset": 0,
00:12:21.286 "data_size": 0
00:12:21.286 },
00:12:21.286 {
00:12:21.286 "name": "BaseBdev3",
00:12:21.286 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:21.286 "is_configured": false,
00:12:21.286 "data_offset": 0,
00:12:21.286 "data_size": 0
00:12:21.286 }
00:12:21.286 ]
00:12:21.286 }'
00:12:21.286 10:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:12:21.286 10:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:21.886 10:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:12:22.146 [2024-07-25 10:27:25.695536] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:22.146 BaseBdev2
00:12:22.146 10:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:12:22.146 10:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:12:22.146 10:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:22.146 10:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:22.146 10:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:22.146 10:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:22.146 10:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:12:22.404 10:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:22.663 [
00:12:22.663 {
00:12:22.663 "name": "BaseBdev2",
00:12:22.663 "aliases": [
00:12:22.663 "f88f2fd4-2840-4d51-bc7a-adfaf753b404"
00:12:22.663 ],
00:12:22.663 "product_name": "Malloc disk",
00:12:22.663 "block_size": 512,
00:12:22.663 "num_blocks": 65536,
00:12:22.663 "uuid": "f88f2fd4-2840-4d51-bc7a-adfaf753b404",
00:12:22.663 "assigned_rate_limits": {
00:12:22.663 "rw_ios_per_sec": 0,
00:12:22.663 "rw_mbytes_per_sec": 0,
00:12:22.663 "r_mbytes_per_sec": 0,
00:12:22.663 "w_mbytes_per_sec": 0
00:12:22.663 },
00:12:22.663 "claimed": true,
00:12:22.663 "claim_type": "exclusive_write",
00:12:22.663 "zoned": false,
00:12:22.663 "supported_io_types": {
00:12:22.663 "read": true,
00:12:22.663 "write": true,
00:12:22.663 "unmap": true,
00:12:22.663 "flush": true,
00:12:22.663 "reset": true,
00:12:22.663 "nvme_admin": false,
00:12:22.663 "nvme_io": false,
00:12:22.663 "nvme_io_md": false,
00:12:22.663 "write_zeroes": true,
00:12:22.663 "zcopy": true,
00:12:22.663 "get_zone_info": false,
00:12:22.663 "zone_management": false,
00:12:22.663 "zone_append": false,
00:12:22.663 "compare": false,
00:12:22.663 "compare_and_write": false,
00:12:22.663 "abort": true,
00:12:22.663 "seek_hole": false,
00:12:22.663 "seek_data": false,
00:12:22.663 "copy": true,
00:12:22.663 "nvme_iov_md": false
00:12:22.663 },
00:12:22.663 "memory_domains": [
00:12:22.663 {
00:12:22.663 "dma_device_id": "system",
00:12:22.663 "dma_device_type": 1
00:12:22.663 },
00:12:22.663 {
00:12:22.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:22.663 "dma_device_type": 2
00:12:22.663 }
00:12:22.663 ],
00:12:22.663 "driver_specific": {}
00:12:22.663 }
00:12:22.663 ]
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:22.663 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:22.921 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:12:22.921 "name": "Existed_Raid",
00:12:22.921 "uuid": "9eadd107-a64f-4f4d-b4ab-aa764a4cf627",
00:12:22.921 "strip_size_kb": 64,
00:12:22.921 "state": "configuring",
00:12:22.921 "raid_level": "raid0",
00:12:22.921 "superblock": true,
00:12:22.921 "num_base_bdevs": 3,
00:12:22.922 "num_base_bdevs_discovered": 2,
00:12:22.922 "num_base_bdevs_operational": 3,
00:12:22.922 "base_bdevs_list": [
00:12:22.922 {
00:12:22.922 "name": "BaseBdev1",
00:12:22.922 "uuid": "0f8bb251-453e-46c5-9d5b-6f832a96fbac",
00:12:22.922 "is_configured": true,
00:12:22.922 "data_offset": 2048,
00:12:22.922 "data_size": 63488
00:12:22.922 },
00:12:22.922 {
00:12:22.922 "name": "BaseBdev2",
00:12:22.922 "uuid": "f88f2fd4-2840-4d51-bc7a-adfaf753b404",
00:12:22.922 "is_configured": true,
00:12:22.922 "data_offset": 2048,
00:12:22.922 "data_size": 63488
00:12:22.922 },
00:12:22.922 {
00:12:22.922 "name": "BaseBdev3",
00:12:22.922 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:22.922 "is_configured": false,
00:12:22.922 "data_offset": 0,
00:12:22.922 "data_size": 0
00:12:22.922 }
00:12:22.922 ]
00:12:22.922 }'
00:12:22.922 10:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:12:22.922 10:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:23.488 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:12:23.746 [2024-07-25 10:27:27.241748] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:23.746 [2024-07-25 10:27:27.241997] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x10d6d90
00:12:23.746 [2024-07-25 10:27:27.242017] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:12:23.746 [2024-07-25 10:27:27.242201] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10daa90
00:12:23.746 [2024-07-25 10:27:27.242357] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10d6d90
00:12:23.746 [2024-07-25 10:27:27.242374] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10d6d90
00:12:23.746 [2024-07-25 10:27:27.242488] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:23.746 BaseBdev3
00:12:23.746 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3
00:12:23.746 10:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:12:23.746 10:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:23.746 10:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:23.746 10:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:23.746 10:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:23.747 10:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:12:24.004 10:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:24.263 [
00:12:24.263 {
00:12:24.263 "name": "BaseBdev3",
00:12:24.263 "aliases": [
00:12:24.263 "712156a9-069e-45d1-93ef-7452239cd2fd"
00:12:24.263 ],
00:12:24.263 "product_name": "Malloc disk",
00:12:24.263 "block_size": 512,
00:12:24.263 "num_blocks": 65536,
00:12:24.263 "uuid": "712156a9-069e-45d1-93ef-7452239cd2fd",
00:12:24.263 "assigned_rate_limits": {
00:12:24.263 "rw_ios_per_sec": 0,
00:12:24.263 "rw_mbytes_per_sec": 0,
00:12:24.263 "r_mbytes_per_sec": 0,
00:12:24.263 "w_mbytes_per_sec": 0
00:12:24.263 },
00:12:24.263 "claimed": true,
00:12:24.263 "claim_type": "exclusive_write",
00:12:24.263 "zoned": false,
00:12:24.263 "supported_io_types": {
00:12:24.263 "read": true,
00:12:24.263 "write": true,
00:12:24.263 "unmap": true,
00:12:24.263 "flush": true,
00:12:24.263 "reset": true,
00:12:24.263 "nvme_admin": false,
00:12:24.263 "nvme_io": false,
00:12:24.263 "nvme_io_md": false,
00:12:24.263 "write_zeroes": true,
00:12:24.263 "zcopy": true,
00:12:24.263 "get_zone_info": false,
00:12:24.263 "zone_management": false,
00:12:24.263 "zone_append": false,
00:12:24.263 "compare": false,
00:12:24.263 "compare_and_write": false,
00:12:24.263 "abort": true,
00:12:24.263 "seek_hole": false,
00:12:24.263 "seek_data": false,
00:12:24.263 "copy": true,
00:12:24.263 "nvme_iov_md": false
00:12:24.263 },
00:12:24.263 "memory_domains": [
00:12:24.263 {
00:12:24.263 "dma_device_id": "system",
00:12:24.263 "dma_device_type": 1
00:12:24.263 },
00:12:24.263 {
00:12:24.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:24.263 "dma_device_type": 2
00:12:24.263 }
00:12:24.263 ],
00:12:24.263 "driver_specific": {}
00:12:24.263 }
00:12:24.263 ]
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:24.263 10:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:24.521 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:12:24.521 "name": "Existed_Raid",
00:12:24.521 "uuid": "9eadd107-a64f-4f4d-b4ab-aa764a4cf627",
00:12:24.521 "strip_size_kb": 64,
00:12:24.521 "state": "online",
00:12:24.521 "raid_level": "raid0",
00:12:24.521 "superblock": true,
00:12:24.521 "num_base_bdevs": 3,
00:12:24.521 "num_base_bdevs_discovered": 3,
00:12:24.521 "num_base_bdevs_operational": 3,
00:12:24.521 "base_bdevs_list": [
00:12:24.522 {
00:12:24.522 "name": "BaseBdev1",
00:12:24.522 "uuid": "0f8bb251-453e-46c5-9d5b-6f832a96fbac",
00:12:24.522 "is_configured": true,
00:12:24.522 "data_offset": 2048,
00:12:24.522 "data_size": 63488
00:12:24.522 },
00:12:24.522 {
00:12:24.522 "name": "BaseBdev2",
00:12:24.522 "uuid": "f88f2fd4-2840-4d51-bc7a-adfaf753b404",
00:12:24.522 "is_configured": true,
00:12:24.522 "data_offset": 2048,
00:12:24.522 "data_size": 63488
00:12:24.522 },
00:12:24.522 {
00:12:24.522 "name": "BaseBdev3",
00:12:24.522 "uuid": "712156a9-069e-45d1-93ef-7452239cd2fd",
00:12:24.522 "is_configured": true,
00:12:24.522 "data_offset": 2048,
00:12:24.522 "data_size": 63488
00:12:24.522 }
00:12:24.522 ]
00:12:24.522 }'
00:12:24.522 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:12:24.522 10:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.087 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid
00:12:25.087 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:12:25.087 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:12:25.087 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:12:25.087 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:12:25.087 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name
00:12:25.087 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:12:25.087 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:12:25.087 [2024-07-25 10:27:28.782216] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:25.346 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:12:25.346 "name": "Existed_Raid",
00:12:25.346 "aliases": [
00:12:25.346 "9eadd107-a64f-4f4d-b4ab-aa764a4cf627"
00:12:25.346 ],
00:12:25.346 "product_name": "Raid Volume",
00:12:25.346 "block_size": 512,
00:12:25.346 "num_blocks": 190464,
00:12:25.346 "uuid": "9eadd107-a64f-4f4d-b4ab-aa764a4cf627",
00:12:25.346 "assigned_rate_limits": {
00:12:25.346 "rw_ios_per_sec": 0,
00:12:25.346 "rw_mbytes_per_sec": 0,
00:12:25.346 "r_mbytes_per_sec": 0,
00:12:25.346 "w_mbytes_per_sec": 0
00:12:25.346 },
00:12:25.346 "claimed": false,
00:12:25.346 "zoned": false,
00:12:25.346 "supported_io_types": {
00:12:25.346 "read": true,
00:12:25.346 "write": true,
00:12:25.346 "unmap": true,
00:12:25.346 "flush": true,
00:12:25.346 "reset": true,
00:12:25.346 "nvme_admin": false,
00:12:25.346 "nvme_io": false,
00:12:25.346 "nvme_io_md": false,
00:12:25.346 "write_zeroes": true,
00:12:25.346 "zcopy": false,
00:12:25.346 "get_zone_info": false,
00:12:25.346 "zone_management": false,
00:12:25.346 "zone_append": false,
00:12:25.346 "compare": false,
00:12:25.346 "compare_and_write": false,
00:12:25.346 "abort": false,
00:12:25.346 "seek_hole": false,
00:12:25.346 "seek_data": false,
00:12:25.346 "copy": false,
00:12:25.346 "nvme_iov_md": false
00:12:25.346 },
00:12:25.346 "memory_domains": [
00:12:25.346 {
00:12:25.346 "dma_device_id": "system",
00:12:25.346 "dma_device_type": 1
00:12:25.346 },
00:12:25.346 {
00:12:25.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:25.346 "dma_device_type": 2
00:12:25.346 },
00:12:25.346 {
00:12:25.346 "dma_device_id": "system",
00:12:25.346 "dma_device_type": 1
00:12:25.346 },
00:12:25.346 {
00:12:25.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:25.346 "dma_device_type": 2
00:12:25.346 },
00:12:25.346 {
00:12:25.346 "dma_device_id": "system",
00:12:25.346 "dma_device_type": 1
00:12:25.346 },
00:12:25.346 {
00:12:25.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:25.346 "dma_device_type": 2
00:12:25.346 }
00:12:25.346 ],
00:12:25.346 "driver_specific": {
00:12:25.346 "raid": {
00:12:25.346 "uuid": "9eadd107-a64f-4f4d-b4ab-aa764a4cf627",
00:12:25.346 "strip_size_kb": 64,
00:12:25.346 "state": "online",
00:12:25.346 "raid_level": "raid0",
00:12:25.346 "superblock": true,
00:12:25.346 "num_base_bdevs": 3,
00:12:25.346 "num_base_bdevs_discovered": 3,
00:12:25.346 "num_base_bdevs_operational": 3,
00:12:25.346 "base_bdevs_list": [
00:12:25.346 {
00:12:25.346 "name": "BaseBdev1",
00:12:25.346 "uuid": "0f8bb251-453e-46c5-9d5b-6f832a96fbac",
00:12:25.346 "is_configured": true,
00:12:25.346 "data_offset": 2048,
00:12:25.346 "data_size": 63488
00:12:25.346 },
00:12:25.346 {
00:12:25.346 "name": "BaseBdev2",
00:12:25.346 "uuid": "f88f2fd4-2840-4d51-bc7a-adfaf753b404",
00:12:25.346 "is_configured": true,
00:12:25.346 "data_offset": 2048,
00:12:25.346 "data_size": 63488
00:12:25.346 },
00:12:25.346 {
00:12:25.346 "name": "BaseBdev3",
00:12:25.346 "uuid": "712156a9-069e-45d1-93ef-7452239cd2fd",
00:12:25.346 "is_configured": true,
00:12:25.346 "data_offset": 2048,
00:12:25.346 "data_size": 63488
00:12:25.346 }
00:12:25.346 ]
00:12:25.346 }
00:12:25.346 }
00:12:25.346 }'
00:12:25.346 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:25.346 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1
00:12:25.346 BaseBdev2
00:12:25.346 BaseBdev3'
00:12:25.346 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:12:25.346 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1
00:12:25.346 10:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:12:25.604 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:12:25.605 "name": "BaseBdev1",
00:12:25.605 "aliases": [
00:12:25.605 "0f8bb251-453e-46c5-9d5b-6f832a96fbac"
00:12:25.605 ],
00:12:25.605 "product_name": "Malloc disk",
00:12:25.605 "block_size": 512,
00:12:25.605 "num_blocks": 65536,
00:12:25.605 "uuid": "0f8bb251-453e-46c5-9d5b-6f832a96fbac",
00:12:25.605 "assigned_rate_limits": {
00:12:25.605 "rw_ios_per_sec": 0,
00:12:25.605 "rw_mbytes_per_sec": 0,
00:12:25.605 "r_mbytes_per_sec": 0,
00:12:25.605 "w_mbytes_per_sec": 0
00:12:25.605 },
00:12:25.605 "claimed": true,
00:12:25.605 "claim_type": "exclusive_write",
00:12:25.605 "zoned": false,
00:12:25.605 "supported_io_types": {
00:12:25.605 "read": true,
00:12:25.605 "write": true,
00:12:25.605 "unmap": true,
00:12:25.605 "flush": true,
00:12:25.605 "reset": true,
00:12:25.605 "nvme_admin": false,
00:12:25.605 "nvme_io": false,
00:12:25.605 "nvme_io_md": false,
00:12:25.605 "write_zeroes": true,
00:12:25.605 "zcopy": true,
00:12:25.605 "get_zone_info": false,
00:12:25.605 "zone_management": false,
00:12:25.605 "zone_append": false,
00:12:25.605 "compare": false,
00:12:25.605 "compare_and_write": false,
00:12:25.605 "abort": true,
00:12:25.605 "seek_hole": false,
00:12:25.605 "seek_data": false,
00:12:25.605 "copy": true,
00:12:25.605 "nvme_iov_md": false
00:12:25.605 },
00:12:25.605 "memory_domains": [
00:12:25.605 {
00:12:25.605 "dma_device_id": "system",
00:12:25.605 "dma_device_type": 1
00:12:25.605 },
00:12:25.605 {
00:12:25.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:25.605 "dma_device_type": 2
00:12:25.605 }
00:12:25.605 ],
00:12:25.605 "driver_specific": {}
00:12:25.605 }'
00:12:25.605 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:25.605 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:25.605 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:12:25.605 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:25.605 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:25.605 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:12:25.605 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:25.863 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:25.863 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:12:25.863 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:25.863 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:25.863 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:12:25.863 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:12:25.863 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2
00:12:25.863 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:12:26.122 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:12:26.122 "name": "BaseBdev2",
00:12:26.122 "aliases": [
00:12:26.122 "f88f2fd4-2840-4d51-bc7a-adfaf753b404"
00:12:26.122 ],
00:12:26.122 "product_name": "Malloc disk",
00:12:26.122 "block_size": 512,
00:12:26.122 "num_blocks": 65536,
00:12:26.122 "uuid": "f88f2fd4-2840-4d51-bc7a-adfaf753b404",
00:12:26.122 "assigned_rate_limits": {
00:12:26.122 "rw_ios_per_sec": 0,
00:12:26.122 "rw_mbytes_per_sec": 0,
00:12:26.122 "r_mbytes_per_sec": 0,
00:12:26.122 "w_mbytes_per_sec": 0
00:12:26.122 },
00:12:26.122 "claimed": true,
00:12:26.122 "claim_type": "exclusive_write",
00:12:26.122 "zoned": false,
00:12:26.122 "supported_io_types": {
00:12:26.122 "read": true,
00:12:26.122 "write": true,
00:12:26.122 "unmap": true,
00:12:26.122 "flush": true,
00:12:26.122 "reset": true,
00:12:26.122 "nvme_admin": false,
00:12:26.122 "nvme_io": false,
00:12:26.122 "nvme_io_md": false,
00:12:26.122 "write_zeroes": true,
00:12:26.122 "zcopy": true,
00:12:26.122 "get_zone_info": false,
00:12:26.122 "zone_management": false,
00:12:26.122 "zone_append": false,
00:12:26.122 "compare": false,
00:12:26.122 "compare_and_write": false,
00:12:26.122 "abort": true,
00:12:26.122 "seek_hole": false,
00:12:26.122 "seek_data": false,
00:12:26.122 "copy": true,
00:12:26.122 "nvme_iov_md": false
00:12:26.122 },
00:12:26.122 "memory_domains": [
00:12:26.122 {
00:12:26.122 "dma_device_id": "system",
00:12:26.122 "dma_device_type": 1
00:12:26.122 },
00:12:26.122 {
00:12:26.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:26.122 "dma_device_type": 2
00:12:26.122 }
00:12:26.122 ],
00:12:26.122 "driver_specific": {}
00:12:26.122 }'
00:12:26.122 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:26.122 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:26.122 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:12:26.122 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:26.122 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:26.380 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:12:26.380 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:26.380 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:26.380 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:12:26.380 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:26.380 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:26.380 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:12:26.380 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:12:26.380 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3
00:12:26.380 10:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:12:26.638 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:12:26.638 "name": "BaseBdev3",
00:12:26.638 "aliases": [
00:12:26.638 "712156a9-069e-45d1-93ef-7452239cd2fd"
00:12:26.638 ],
00:12:26.638 "product_name": "Malloc disk",
00:12:26.638 "block_size": 512,
00:12:26.638 "num_blocks": 65536,
00:12:26.638 "uuid": "712156a9-069e-45d1-93ef-7452239cd2fd",
00:12:26.638 "assigned_rate_limits": {
00:12:26.638 "rw_ios_per_sec": 0,
00:12:26.638 "rw_mbytes_per_sec": 0,
00:12:26.638 "r_mbytes_per_sec": 0,
00:12:26.638 "w_mbytes_per_sec": 0
00:12:26.638 },
00:12:26.638 "claimed": true,
00:12:26.638 "claim_type": "exclusive_write",
00:12:26.638 "zoned": false,
00:12:26.638 "supported_io_types": {
00:12:26.638 "read": true,
00:12:26.638 "write": true,
00:12:26.638 "unmap": true,
00:12:26.638 "flush": true,
00:12:26.638 "reset": true,
00:12:26.638 "nvme_admin": false,
00:12:26.638 "nvme_io": false,
00:12:26.638 "nvme_io_md": false,
00:12:26.638 "write_zeroes": true,
00:12:26.638 "zcopy": true,
00:12:26.638 "get_zone_info": false,
00:12:26.638 "zone_management": false,
00:12:26.638 "zone_append": false,
00:12:26.638 "compare": false,
00:12:26.638 "compare_and_write": false,
00:12:26.638 "abort": true,
00:12:26.638 "seek_hole": false,
00:12:26.638 "seek_data": false,
00:12:26.638 "copy": true,
00:12:26.638 "nvme_iov_md": false
00:12:26.638 },
00:12:26.638 "memory_domains": [
00:12:26.638 {
00:12:26.638 "dma_device_id": "system",
00:12:26.638 "dma_device_type": 1
00:12:26.638 },
00:12:26.638 {
00:12:26.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:26.638 "dma_device_type": 2
00:12:26.638 }
00:12:26.638 ],
00:12:26.638 "driver_specific": {}
00:12:26.638 }'
00:12:26.638 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:26.638 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:26.896 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:12:26.896 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:26.896 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:26.896 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:12:26.896 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:26.896 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:26.896 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:12:26.896 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:26.896 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:26.896 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:12:26.896 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:12:27.153 [2024-07-25 10:27:30.803364] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:27.153 [2024-07-25 10:27:30.803395] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:27.153 [2024-07-25 10:27:30.803456] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:27.153 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state
00:12:27.153 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0
00:12:27.153 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in
00:12:27.153 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1
00:12:27.153 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline
00:12:27.153 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:12:27.153 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:12:27.153 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline
00:12:27.153 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:12:27.154 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:12:27.154 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:12:27.154 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:12:27.154 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:12:27.154 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:12:27.154 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:12:27.154 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:27.154 10:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:27.411 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:12:27.411 "name": "Existed_Raid",
00:12:27.411 "uuid": "9eadd107-a64f-4f4d-b4ab-aa764a4cf627",
00:12:27.411 "strip_size_kb": 64,
00:12:27.411 "state": "offline",
00:12:27.411 "raid_level": "raid0",
00:12:27.411 "superblock": true,
00:12:27.411 "num_base_bdevs": 3,
00:12:27.411 "num_base_bdevs_discovered": 2,
00:12:27.411 "num_base_bdevs_operational": 2,
00:12:27.411 "base_bdevs_list": [
00:12:27.411 {
00:12:27.411 "name": null,
00:12:27.412 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:27.412 "is_configured": false,
00:12:27.412 "data_offset": 2048,
00:12:27.412 "data_size": 63488
00:12:27.412 },
00:12:27.412 {
00:12:27.412 "name": "BaseBdev2",
00:12:27.412 "uuid": "f88f2fd4-2840-4d51-bc7a-adfaf753b404",
00:12:27.412 "is_configured": true,
00:12:27.412 "data_offset": 2048,
00:12:27.412 "data_size": 63488
00:12:27.412 },
00:12:27.412 {
00:12:27.412 "name": "BaseBdev3",
00:12:27.412 "uuid": "712156a9-069e-45d1-93ef-7452239cd2fd",
00:12:27.412 "is_configured": true,
00:12:27.412 "data_offset": 2048,
00:12:27.412 "data_size": 63488
00:12:27.412 }
00:12:27.412 ]
00:12:27.412 }'
00:12:27.412 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:12:27.412 10:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:27.976 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 ))
00:12:27.976 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:12:27.976 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:27.976 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:12:28.234 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
00:12:28.234 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:28.234 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:12:28.492 [2024-07-25 10:27:32.168905] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:28.492 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:12:28.492 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:12:28.492 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:28.750 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:28.750 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:28.750 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:28.750 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:29.316 [2024-07-25 10:27:32.733321] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:29.316 [2024-07-25 10:27:32.733384] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10d6d90 name Existed_Raid, state offline 00:12:29.316 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:29.316 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:29.316 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.316 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:29.574 10:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:29.574 10:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:29.574 10:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:29.574 10:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:29.574 10:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:29.574 10:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:29.831 BaseBdev2 00:12:29.831 10:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:29.831 10:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:29.831 10:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:29.831 10:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:29.831 10:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:29.831 10:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:29.831 10:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:29.831 10:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:30.088 [ 00:12:30.088 { 00:12:30.088 "name": "BaseBdev2", 00:12:30.088 "aliases": [ 00:12:30.088 "93d20f5c-e161-4754-98c4-ca165f10128f" 00:12:30.088 ], 00:12:30.088 "product_name": "Malloc disk", 00:12:30.088 "block_size": 512, 00:12:30.088 "num_blocks": 65536, 00:12:30.088 "uuid": "93d20f5c-e161-4754-98c4-ca165f10128f", 00:12:30.088 "assigned_rate_limits": { 00:12:30.088 "rw_ios_per_sec": 0, 00:12:30.088 "rw_mbytes_per_sec": 0, 00:12:30.088 "r_mbytes_per_sec": 0, 00:12:30.088 "w_mbytes_per_sec": 0 00:12:30.088 }, 00:12:30.088 "claimed": false, 00:12:30.088 "zoned": false, 00:12:30.088 "supported_io_types": { 00:12:30.088 "read": true, 00:12:30.088 "write": true, 00:12:30.088 "unmap": true, 00:12:30.088 "flush": 
true, 00:12:30.088 "reset": true, 00:12:30.088 "nvme_admin": false, 00:12:30.088 "nvme_io": false, 00:12:30.088 "nvme_io_md": false, 00:12:30.088 "write_zeroes": true, 00:12:30.088 "zcopy": true, 00:12:30.088 "get_zone_info": false, 00:12:30.088 "zone_management": false, 00:12:30.088 "zone_append": false, 00:12:30.088 "compare": false, 00:12:30.088 "compare_and_write": false, 00:12:30.088 "abort": true, 00:12:30.088 "seek_hole": false, 00:12:30.088 "seek_data": false, 00:12:30.088 "copy": true, 00:12:30.088 "nvme_iov_md": false 00:12:30.088 }, 00:12:30.088 "memory_domains": [ 00:12:30.088 { 00:12:30.088 "dma_device_id": "system", 00:12:30.088 "dma_device_type": 1 00:12:30.088 }, 00:12:30.088 { 00:12:30.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.088 "dma_device_type": 2 00:12:30.088 } 00:12:30.088 ], 00:12:30.088 "driver_specific": {} 00:12:30.088 } 00:12:30.088 ] 00:12:30.345 10:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:30.345 10:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:30.345 10:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:30.345 10:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:30.602 BaseBdev3 00:12:30.602 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:30.602 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:30.602 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:30.602 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:30.602 10:27:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:30.602 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:30.603 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:30.860 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:31.119 [ 00:12:31.119 { 00:12:31.119 "name": "BaseBdev3", 00:12:31.119 "aliases": [ 00:12:31.119 "c354af05-bfc9-47a2-9694-4ec24cb005cb" 00:12:31.119 ], 00:12:31.119 "product_name": "Malloc disk", 00:12:31.119 "block_size": 512, 00:12:31.119 "num_blocks": 65536, 00:12:31.119 "uuid": "c354af05-bfc9-47a2-9694-4ec24cb005cb", 00:12:31.119 "assigned_rate_limits": { 00:12:31.119 "rw_ios_per_sec": 0, 00:12:31.119 "rw_mbytes_per_sec": 0, 00:12:31.119 "r_mbytes_per_sec": 0, 00:12:31.119 "w_mbytes_per_sec": 0 00:12:31.119 }, 00:12:31.119 "claimed": false, 00:12:31.119 "zoned": false, 00:12:31.119 "supported_io_types": { 00:12:31.119 "read": true, 00:12:31.119 "write": true, 00:12:31.119 "unmap": true, 00:12:31.119 "flush": true, 00:12:31.119 "reset": true, 00:12:31.119 "nvme_admin": false, 00:12:31.119 "nvme_io": false, 00:12:31.119 "nvme_io_md": false, 00:12:31.119 "write_zeroes": true, 00:12:31.119 "zcopy": true, 00:12:31.119 "get_zone_info": false, 00:12:31.119 "zone_management": false, 00:12:31.119 "zone_append": false, 00:12:31.119 "compare": false, 00:12:31.119 "compare_and_write": false, 00:12:31.119 "abort": true, 00:12:31.119 "seek_hole": false, 00:12:31.119 "seek_data": false, 00:12:31.119 "copy": true, 00:12:31.119 "nvme_iov_md": false 00:12:31.119 }, 00:12:31.119 "memory_domains": [ 00:12:31.119 { 00:12:31.119 "dma_device_id": "system", 00:12:31.119 "dma_device_type": 1 
00:12:31.119 }, 00:12:31.119 { 00:12:31.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.119 "dma_device_type": 2 00:12:31.119 } 00:12:31.119 ], 00:12:31.119 "driver_specific": {} 00:12:31.119 } 00:12:31.119 ] 00:12:31.119 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:31.119 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:31.119 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:31.119 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:31.119 [2024-07-25 10:27:34.820978] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.119 [2024-07-25 10:27:34.821023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.119 [2024-07-25 10:27:34.821066] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:31.119 [2024-07-25 10:27:34.822533] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:31.377 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:31.377 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:31.377 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:31.377 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:31.377 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:31.377 10:27:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:31.377 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:31.378 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:31.378 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:31.378 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:31.378 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.378 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.378 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:31.378 "name": "Existed_Raid", 00:12:31.378 "uuid": "ca55ac15-a24e-4673-a840-1f4620e0ab8e", 00:12:31.378 "strip_size_kb": 64, 00:12:31.378 "state": "configuring", 00:12:31.378 "raid_level": "raid0", 00:12:31.378 "superblock": true, 00:12:31.378 "num_base_bdevs": 3, 00:12:31.378 "num_base_bdevs_discovered": 2, 00:12:31.378 "num_base_bdevs_operational": 3, 00:12:31.378 "base_bdevs_list": [ 00:12:31.378 { 00:12:31.378 "name": "BaseBdev1", 00:12:31.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.378 "is_configured": false, 00:12:31.378 "data_offset": 0, 00:12:31.378 "data_size": 0 00:12:31.378 }, 00:12:31.378 { 00:12:31.378 "name": "BaseBdev2", 00:12:31.378 "uuid": "93d20f5c-e161-4754-98c4-ca165f10128f", 00:12:31.378 "is_configured": true, 00:12:31.378 "data_offset": 2048, 00:12:31.378 "data_size": 63488 00:12:31.378 }, 00:12:31.378 { 00:12:31.378 "name": "BaseBdev3", 00:12:31.378 "uuid": "c354af05-bfc9-47a2-9694-4ec24cb005cb", 00:12:31.378 "is_configured": true, 00:12:31.378 "data_offset": 2048, 00:12:31.378 
"data_size": 63488 00:12:31.378 } 00:12:31.378 ] 00:12:31.378 }' 00:12:31.378 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:31.378 10:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.944 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:32.510 [2024-07-25 10:27:35.923901] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:32.510 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:32.510 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:32.510 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:32.510 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:32.510 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:32.510 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:32.510 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:32.510 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:32.510 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:32.510 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:32.510 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:12:32.510 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.510 10:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:32.510 "name": "Existed_Raid", 00:12:32.510 "uuid": "ca55ac15-a24e-4673-a840-1f4620e0ab8e", 00:12:32.510 "strip_size_kb": 64, 00:12:32.510 "state": "configuring", 00:12:32.510 "raid_level": "raid0", 00:12:32.510 "superblock": true, 00:12:32.510 "num_base_bdevs": 3, 00:12:32.510 "num_base_bdevs_discovered": 1, 00:12:32.510 "num_base_bdevs_operational": 3, 00:12:32.510 "base_bdevs_list": [ 00:12:32.510 { 00:12:32.510 "name": "BaseBdev1", 00:12:32.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.510 "is_configured": false, 00:12:32.510 "data_offset": 0, 00:12:32.510 "data_size": 0 00:12:32.510 }, 00:12:32.510 { 00:12:32.510 "name": null, 00:12:32.510 "uuid": "93d20f5c-e161-4754-98c4-ca165f10128f", 00:12:32.510 "is_configured": false, 00:12:32.510 "data_offset": 2048, 00:12:32.510 "data_size": 63488 00:12:32.510 }, 00:12:32.510 { 00:12:32.510 "name": "BaseBdev3", 00:12:32.510 "uuid": "c354af05-bfc9-47a2-9694-4ec24cb005cb", 00:12:32.510 "is_configured": true, 00:12:32.510 "data_offset": 2048, 00:12:32.510 "data_size": 63488 00:12:32.510 } 00:12:32.510 ] 00:12:32.510 }' 00:12:32.510 10:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:32.510 10:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.074 10:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.074 10:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:33.331 10:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == 
\f\a\l\s\e ]] 00:12:33.331 10:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:33.589 [2024-07-25 10:27:37.266312] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.589 BaseBdev1 00:12:33.589 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:33.589 10:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:33.589 10:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:33.589 10:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:33.589 10:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:33.589 10:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:33.589 10:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:34.155 10:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:34.155 [ 00:12:34.155 { 00:12:34.155 "name": "BaseBdev1", 00:12:34.155 "aliases": [ 00:12:34.155 "72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2" 00:12:34.155 ], 00:12:34.155 "product_name": "Malloc disk", 00:12:34.155 "block_size": 512, 00:12:34.155 "num_blocks": 65536, 00:12:34.155 "uuid": "72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2", 00:12:34.155 "assigned_rate_limits": { 00:12:34.155 "rw_ios_per_sec": 0, 00:12:34.155 "rw_mbytes_per_sec": 0, 00:12:34.155 "r_mbytes_per_sec": 0, 00:12:34.155 
"w_mbytes_per_sec": 0 00:12:34.155 }, 00:12:34.155 "claimed": true, 00:12:34.155 "claim_type": "exclusive_write", 00:12:34.155 "zoned": false, 00:12:34.155 "supported_io_types": { 00:12:34.155 "read": true, 00:12:34.155 "write": true, 00:12:34.155 "unmap": true, 00:12:34.155 "flush": true, 00:12:34.155 "reset": true, 00:12:34.155 "nvme_admin": false, 00:12:34.155 "nvme_io": false, 00:12:34.155 "nvme_io_md": false, 00:12:34.155 "write_zeroes": true, 00:12:34.155 "zcopy": true, 00:12:34.155 "get_zone_info": false, 00:12:34.156 "zone_management": false, 00:12:34.156 "zone_append": false, 00:12:34.156 "compare": false, 00:12:34.156 "compare_and_write": false, 00:12:34.156 "abort": true, 00:12:34.156 "seek_hole": false, 00:12:34.156 "seek_data": false, 00:12:34.156 "copy": true, 00:12:34.156 "nvme_iov_md": false 00:12:34.156 }, 00:12:34.156 "memory_domains": [ 00:12:34.156 { 00:12:34.156 "dma_device_id": "system", 00:12:34.156 "dma_device_type": 1 00:12:34.156 }, 00:12:34.156 { 00:12:34.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.156 "dma_device_type": 2 00:12:34.156 } 00:12:34.156 ], 00:12:34.156 "driver_specific": {} 00:12:34.156 } 00:12:34.156 ] 00:12:34.156 10:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:34.156 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:34.156 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:34.156 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:34.156 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:34.156 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:34.156 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:12:34.156 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:34.156 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:34.156 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:34.156 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:34.156 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.156 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.414 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:34.414 "name": "Existed_Raid", 00:12:34.414 "uuid": "ca55ac15-a24e-4673-a840-1f4620e0ab8e", 00:12:34.414 "strip_size_kb": 64, 00:12:34.414 "state": "configuring", 00:12:34.414 "raid_level": "raid0", 00:12:34.414 "superblock": true, 00:12:34.414 "num_base_bdevs": 3, 00:12:34.414 "num_base_bdevs_discovered": 2, 00:12:34.414 "num_base_bdevs_operational": 3, 00:12:34.414 "base_bdevs_list": [ 00:12:34.414 { 00:12:34.414 "name": "BaseBdev1", 00:12:34.414 "uuid": "72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2", 00:12:34.414 "is_configured": true, 00:12:34.414 "data_offset": 2048, 00:12:34.414 "data_size": 63488 00:12:34.414 }, 00:12:34.414 { 00:12:34.414 "name": null, 00:12:34.414 "uuid": "93d20f5c-e161-4754-98c4-ca165f10128f", 00:12:34.414 "is_configured": false, 00:12:34.414 "data_offset": 2048, 00:12:34.414 "data_size": 63488 00:12:34.414 }, 00:12:34.414 { 00:12:34.414 "name": "BaseBdev3", 00:12:34.414 "uuid": "c354af05-bfc9-47a2-9694-4ec24cb005cb", 00:12:34.414 "is_configured": true, 00:12:34.414 "data_offset": 2048, 00:12:34.414 "data_size": 63488 00:12:34.414 } 
00:12:34.414 ] 00:12:34.414 }' 00:12:34.414 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:34.414 10:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.980 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.980 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:35.237 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:35.237 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:35.496 [2024-07-25 10:27:39.127250] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:35.496 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:35.496 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:35.496 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:35.496 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:35.496 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:35.496 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:35.496 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:35.496 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:35.496 
10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:35.496 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:35.496 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.496 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.754 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:35.754 "name": "Existed_Raid", 00:12:35.754 "uuid": "ca55ac15-a24e-4673-a840-1f4620e0ab8e", 00:12:35.754 "strip_size_kb": 64, 00:12:35.754 "state": "configuring", 00:12:35.754 "raid_level": "raid0", 00:12:35.754 "superblock": true, 00:12:35.754 "num_base_bdevs": 3, 00:12:35.754 "num_base_bdevs_discovered": 1, 00:12:35.754 "num_base_bdevs_operational": 3, 00:12:35.754 "base_bdevs_list": [ 00:12:35.754 { 00:12:35.754 "name": "BaseBdev1", 00:12:35.754 "uuid": "72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2", 00:12:35.754 "is_configured": true, 00:12:35.754 "data_offset": 2048, 00:12:35.754 "data_size": 63488 00:12:35.754 }, 00:12:35.754 { 00:12:35.754 "name": null, 00:12:35.754 "uuid": "93d20f5c-e161-4754-98c4-ca165f10128f", 00:12:35.754 "is_configured": false, 00:12:35.754 "data_offset": 2048, 00:12:35.754 "data_size": 63488 00:12:35.754 }, 00:12:35.754 { 00:12:35.754 "name": null, 00:12:35.754 "uuid": "c354af05-bfc9-47a2-9694-4ec24cb005cb", 00:12:35.754 "is_configured": false, 00:12:35.754 "data_offset": 2048, 00:12:35.754 "data_size": 63488 00:12:35.754 } 00:12:35.754 ] 00:12:35.754 }' 00:12:35.754 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:35.754 10:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.319 10:27:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.319 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:36.578 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:36.578 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:36.837 [2024-07-25 10:27:40.454764] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.837 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:36.837 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:36.837 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:36.837 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:36.837 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:36.837 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:36.837 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:36.837 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:36.837 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:36.837 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:36.837 10:27:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.838 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.096 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:37.096 "name": "Existed_Raid", 00:12:37.096 "uuid": "ca55ac15-a24e-4673-a840-1f4620e0ab8e", 00:12:37.096 "strip_size_kb": 64, 00:12:37.096 "state": "configuring", 00:12:37.096 "raid_level": "raid0", 00:12:37.096 "superblock": true, 00:12:37.096 "num_base_bdevs": 3, 00:12:37.096 "num_base_bdevs_discovered": 2, 00:12:37.096 "num_base_bdevs_operational": 3, 00:12:37.096 "base_bdevs_list": [ 00:12:37.096 { 00:12:37.096 "name": "BaseBdev1", 00:12:37.096 "uuid": "72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2", 00:12:37.096 "is_configured": true, 00:12:37.096 "data_offset": 2048, 00:12:37.096 "data_size": 63488 00:12:37.096 }, 00:12:37.096 { 00:12:37.096 "name": null, 00:12:37.096 "uuid": "93d20f5c-e161-4754-98c4-ca165f10128f", 00:12:37.096 "is_configured": false, 00:12:37.096 "data_offset": 2048, 00:12:37.096 "data_size": 63488 00:12:37.096 }, 00:12:37.096 { 00:12:37.096 "name": "BaseBdev3", 00:12:37.096 "uuid": "c354af05-bfc9-47a2-9694-4ec24cb005cb", 00:12:37.096 "is_configured": true, 00:12:37.096 "data_offset": 2048, 00:12:37.096 "data_size": 63488 00:12:37.096 } 00:12:37.096 ] 00:12:37.096 }' 00:12:37.096 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:37.096 10:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.662 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.662 10:27:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:37.920 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:37.920 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:38.179 [2024-07-25 10:27:41.722158] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.179 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:38.179 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:38.179 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:38.179 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:38.179 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:38.179 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:38.179 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:38.179 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:38.179 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:38.179 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:38.179 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.179 10:27:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.437 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:38.437 "name": "Existed_Raid", 00:12:38.437 "uuid": "ca55ac15-a24e-4673-a840-1f4620e0ab8e", 00:12:38.437 "strip_size_kb": 64, 00:12:38.437 "state": "configuring", 00:12:38.437 "raid_level": "raid0", 00:12:38.437 "superblock": true, 00:12:38.437 "num_base_bdevs": 3, 00:12:38.437 "num_base_bdevs_discovered": 1, 00:12:38.437 "num_base_bdevs_operational": 3, 00:12:38.437 "base_bdevs_list": [ 00:12:38.437 { 00:12:38.437 "name": null, 00:12:38.437 "uuid": "72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2", 00:12:38.437 "is_configured": false, 00:12:38.437 "data_offset": 2048, 00:12:38.437 "data_size": 63488 00:12:38.437 }, 00:12:38.437 { 00:12:38.437 "name": null, 00:12:38.437 "uuid": "93d20f5c-e161-4754-98c4-ca165f10128f", 00:12:38.437 "is_configured": false, 00:12:38.437 "data_offset": 2048, 00:12:38.437 "data_size": 63488 00:12:38.437 }, 00:12:38.437 { 00:12:38.437 "name": "BaseBdev3", 00:12:38.437 "uuid": "c354af05-bfc9-47a2-9694-4ec24cb005cb", 00:12:38.437 "is_configured": true, 00:12:38.437 "data_offset": 2048, 00:12:38.437 "data_size": 63488 00:12:38.437 } 00:12:38.437 ] 00:12:38.437 }' 00:12:38.437 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:38.437 10:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.003 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.003 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:39.261 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:39.261 10:27:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:39.520 [2024-07-25 10:27:43.045941] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.520 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:39.520 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:39.520 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:39.520 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:39.520 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:39.520 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:39.520 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:39.520 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:39.520 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:39.520 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:39.520 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.520 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.835 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:39.836 "name": 
"Existed_Raid", 00:12:39.836 "uuid": "ca55ac15-a24e-4673-a840-1f4620e0ab8e", 00:12:39.836 "strip_size_kb": 64, 00:12:39.836 "state": "configuring", 00:12:39.836 "raid_level": "raid0", 00:12:39.836 "superblock": true, 00:12:39.836 "num_base_bdevs": 3, 00:12:39.836 "num_base_bdevs_discovered": 2, 00:12:39.836 "num_base_bdevs_operational": 3, 00:12:39.836 "base_bdevs_list": [ 00:12:39.836 { 00:12:39.836 "name": null, 00:12:39.836 "uuid": "72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2", 00:12:39.836 "is_configured": false, 00:12:39.836 "data_offset": 2048, 00:12:39.836 "data_size": 63488 00:12:39.836 }, 00:12:39.836 { 00:12:39.836 "name": "BaseBdev2", 00:12:39.836 "uuid": "93d20f5c-e161-4754-98c4-ca165f10128f", 00:12:39.836 "is_configured": true, 00:12:39.836 "data_offset": 2048, 00:12:39.836 "data_size": 63488 00:12:39.836 }, 00:12:39.836 { 00:12:39.836 "name": "BaseBdev3", 00:12:39.836 "uuid": "c354af05-bfc9-47a2-9694-4ec24cb005cb", 00:12:39.836 "is_configured": true, 00:12:39.836 "data_offset": 2048, 00:12:39.836 "data_size": 63488 00:12:39.836 } 00:12:39.836 ] 00:12:39.836 }' 00:12:39.836 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:39.836 10:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.401 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.402 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:40.402 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:40.402 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.402 10:27:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:40.661 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2 00:12:40.919 [2024-07-25 10:27:44.565516] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:40.919 [2024-07-25 10:27:44.565745] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x10ccd90 00:12:40.919 [2024-07-25 10:27:44.565775] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:40.919 [2024-07-25 10:27:44.565928] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10daa90 00:12:40.919 [2024-07-25 10:27:44.566059] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10ccd90 00:12:40.919 [2024-07-25 10:27:44.566073] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10ccd90 00:12:40.919 [2024-07-25 10:27:44.566182] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.919 NewBaseBdev 00:12:40.919 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:40.919 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:40.919 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:40.919 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:40.919 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:40.919 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:40.919 10:27:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:41.178 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:41.436 [ 00:12:41.436 { 00:12:41.436 "name": "NewBaseBdev", 00:12:41.436 "aliases": [ 00:12:41.436 "72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2" 00:12:41.436 ], 00:12:41.436 "product_name": "Malloc disk", 00:12:41.436 "block_size": 512, 00:12:41.436 "num_blocks": 65536, 00:12:41.436 "uuid": "72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2", 00:12:41.436 "assigned_rate_limits": { 00:12:41.436 "rw_ios_per_sec": 0, 00:12:41.436 "rw_mbytes_per_sec": 0, 00:12:41.436 "r_mbytes_per_sec": 0, 00:12:41.436 "w_mbytes_per_sec": 0 00:12:41.436 }, 00:12:41.436 "claimed": true, 00:12:41.436 "claim_type": "exclusive_write", 00:12:41.436 "zoned": false, 00:12:41.436 "supported_io_types": { 00:12:41.436 "read": true, 00:12:41.436 "write": true, 00:12:41.436 "unmap": true, 00:12:41.436 "flush": true, 00:12:41.436 "reset": true, 00:12:41.436 "nvme_admin": false, 00:12:41.436 "nvme_io": false, 00:12:41.436 "nvme_io_md": false, 00:12:41.436 "write_zeroes": true, 00:12:41.436 "zcopy": true, 00:12:41.436 "get_zone_info": false, 00:12:41.436 "zone_management": false, 00:12:41.436 "zone_append": false, 00:12:41.436 "compare": false, 00:12:41.436 "compare_and_write": false, 00:12:41.436 "abort": true, 00:12:41.436 "seek_hole": false, 00:12:41.436 "seek_data": false, 00:12:41.436 "copy": true, 00:12:41.436 "nvme_iov_md": false 00:12:41.436 }, 00:12:41.436 "memory_domains": [ 00:12:41.436 { 00:12:41.436 "dma_device_id": "system", 00:12:41.436 "dma_device_type": 1 00:12:41.436 }, 00:12:41.436 { 00:12:41.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.436 "dma_device_type": 2 00:12:41.436 } 
00:12:41.436 ], 00:12:41.436 "driver_specific": {} 00:12:41.436 } 00:12:41.436 ] 00:12:41.436 10:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:41.436 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:41.436 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:41.436 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:41.436 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:41.436 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:41.436 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:41.436 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.436 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.436 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.436 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.436 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.436 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.695 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.695 "name": "Existed_Raid", 00:12:41.695 "uuid": "ca55ac15-a24e-4673-a840-1f4620e0ab8e", 00:12:41.695 "strip_size_kb": 64, 00:12:41.695 "state": "online", 00:12:41.695 
"raid_level": "raid0", 00:12:41.695 "superblock": true, 00:12:41.695 "num_base_bdevs": 3, 00:12:41.695 "num_base_bdevs_discovered": 3, 00:12:41.695 "num_base_bdevs_operational": 3, 00:12:41.695 "base_bdevs_list": [ 00:12:41.695 { 00:12:41.695 "name": "NewBaseBdev", 00:12:41.695 "uuid": "72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2", 00:12:41.695 "is_configured": true, 00:12:41.695 "data_offset": 2048, 00:12:41.695 "data_size": 63488 00:12:41.695 }, 00:12:41.695 { 00:12:41.695 "name": "BaseBdev2", 00:12:41.695 "uuid": "93d20f5c-e161-4754-98c4-ca165f10128f", 00:12:41.695 "is_configured": true, 00:12:41.695 "data_offset": 2048, 00:12:41.695 "data_size": 63488 00:12:41.695 }, 00:12:41.695 { 00:12:41.695 "name": "BaseBdev3", 00:12:41.695 "uuid": "c354af05-bfc9-47a2-9694-4ec24cb005cb", 00:12:41.695 "is_configured": true, 00:12:41.695 "data_offset": 2048, 00:12:41.695 "data_size": 63488 00:12:41.695 } 00:12:41.695 ] 00:12:41.695 }' 00:12:41.695 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.695 10:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.260 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:42.260 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:42.260 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:42.260 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:42.260 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:42.260 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:42.260 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:42.260 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:42.518 [2024-07-25 10:27:46.125828] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.518 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:42.518 "name": "Existed_Raid", 00:12:42.518 "aliases": [ 00:12:42.518 "ca55ac15-a24e-4673-a840-1f4620e0ab8e" 00:12:42.518 ], 00:12:42.518 "product_name": "Raid Volume", 00:12:42.518 "block_size": 512, 00:12:42.518 "num_blocks": 190464, 00:12:42.518 "uuid": "ca55ac15-a24e-4673-a840-1f4620e0ab8e", 00:12:42.518 "assigned_rate_limits": { 00:12:42.518 "rw_ios_per_sec": 0, 00:12:42.518 "rw_mbytes_per_sec": 0, 00:12:42.518 "r_mbytes_per_sec": 0, 00:12:42.518 "w_mbytes_per_sec": 0 00:12:42.518 }, 00:12:42.518 "claimed": false, 00:12:42.518 "zoned": false, 00:12:42.518 "supported_io_types": { 00:12:42.518 "read": true, 00:12:42.518 "write": true, 00:12:42.518 "unmap": true, 00:12:42.518 "flush": true, 00:12:42.518 "reset": true, 00:12:42.518 "nvme_admin": false, 00:12:42.518 "nvme_io": false, 00:12:42.518 "nvme_io_md": false, 00:12:42.518 "write_zeroes": true, 00:12:42.518 "zcopy": false, 00:12:42.518 "get_zone_info": false, 00:12:42.518 "zone_management": false, 00:12:42.518 "zone_append": false, 00:12:42.518 "compare": false, 00:12:42.518 "compare_and_write": false, 00:12:42.518 "abort": false, 00:12:42.518 "seek_hole": false, 00:12:42.518 "seek_data": false, 00:12:42.518 "copy": false, 00:12:42.518 "nvme_iov_md": false 00:12:42.518 }, 00:12:42.518 "memory_domains": [ 00:12:42.518 { 00:12:42.518 "dma_device_id": "system", 00:12:42.518 "dma_device_type": 1 00:12:42.518 }, 00:12:42.518 { 00:12:42.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.518 "dma_device_type": 2 00:12:42.518 }, 00:12:42.518 { 00:12:42.518 "dma_device_id": "system", 00:12:42.518 "dma_device_type": 1 00:12:42.518 
}, 00:12:42.518 { 00:12:42.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.518 "dma_device_type": 2 00:12:42.518 }, 00:12:42.518 { 00:12:42.518 "dma_device_id": "system", 00:12:42.518 "dma_device_type": 1 00:12:42.518 }, 00:12:42.518 { 00:12:42.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.518 "dma_device_type": 2 00:12:42.518 } 00:12:42.518 ], 00:12:42.518 "driver_specific": { 00:12:42.518 "raid": { 00:12:42.518 "uuid": "ca55ac15-a24e-4673-a840-1f4620e0ab8e", 00:12:42.518 "strip_size_kb": 64, 00:12:42.518 "state": "online", 00:12:42.518 "raid_level": "raid0", 00:12:42.518 "superblock": true, 00:12:42.518 "num_base_bdevs": 3, 00:12:42.518 "num_base_bdevs_discovered": 3, 00:12:42.518 "num_base_bdevs_operational": 3, 00:12:42.518 "base_bdevs_list": [ 00:12:42.518 { 00:12:42.518 "name": "NewBaseBdev", 00:12:42.518 "uuid": "72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2", 00:12:42.518 "is_configured": true, 00:12:42.518 "data_offset": 2048, 00:12:42.518 "data_size": 63488 00:12:42.518 }, 00:12:42.518 { 00:12:42.518 "name": "BaseBdev2", 00:12:42.518 "uuid": "93d20f5c-e161-4754-98c4-ca165f10128f", 00:12:42.518 "is_configured": true, 00:12:42.518 "data_offset": 2048, 00:12:42.518 "data_size": 63488 00:12:42.518 }, 00:12:42.518 { 00:12:42.519 "name": "BaseBdev3", 00:12:42.519 "uuid": "c354af05-bfc9-47a2-9694-4ec24cb005cb", 00:12:42.519 "is_configured": true, 00:12:42.519 "data_offset": 2048, 00:12:42.519 "data_size": 63488 00:12:42.519 } 00:12:42.519 ] 00:12:42.519 } 00:12:42.519 } 00:12:42.519 }' 00:12:42.519 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.519 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:42.519 BaseBdev2 00:12:42.519 BaseBdev3' 00:12:42.519 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:42.519 
10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:42.519 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:42.777 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:42.777 "name": "NewBaseBdev", 00:12:42.777 "aliases": [ 00:12:42.777 "72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2" 00:12:42.777 ], 00:12:42.777 "product_name": "Malloc disk", 00:12:42.777 "block_size": 512, 00:12:42.777 "num_blocks": 65536, 00:12:42.777 "uuid": "72862c1c-bcbd-448e-9ed0-27cdc1e3c3a2", 00:12:42.777 "assigned_rate_limits": { 00:12:42.777 "rw_ios_per_sec": 0, 00:12:42.777 "rw_mbytes_per_sec": 0, 00:12:42.777 "r_mbytes_per_sec": 0, 00:12:42.777 "w_mbytes_per_sec": 0 00:12:42.777 }, 00:12:42.777 "claimed": true, 00:12:42.777 "claim_type": "exclusive_write", 00:12:42.777 "zoned": false, 00:12:42.777 "supported_io_types": { 00:12:42.777 "read": true, 00:12:42.777 "write": true, 00:12:42.777 "unmap": true, 00:12:42.777 "flush": true, 00:12:42.777 "reset": true, 00:12:42.777 "nvme_admin": false, 00:12:42.777 "nvme_io": false, 00:12:42.777 "nvme_io_md": false, 00:12:42.777 "write_zeroes": true, 00:12:42.777 "zcopy": true, 00:12:42.777 "get_zone_info": false, 00:12:42.777 "zone_management": false, 00:12:42.777 "zone_append": false, 00:12:42.777 "compare": false, 00:12:42.777 "compare_and_write": false, 00:12:42.777 "abort": true, 00:12:42.777 "seek_hole": false, 00:12:42.777 "seek_data": false, 00:12:42.777 "copy": true, 00:12:42.777 "nvme_iov_md": false 00:12:42.777 }, 00:12:42.777 "memory_domains": [ 00:12:42.777 { 00:12:42.777 "dma_device_id": "system", 00:12:42.777 "dma_device_type": 1 00:12:42.777 }, 00:12:42.777 { 00:12:42.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.777 "dma_device_type": 2 00:12:42.777 } 00:12:42.777 ], 00:12:42.777 
"driver_specific": {} 00:12:42.777 }' 00:12:42.777 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:42.777 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:42.777 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:42.777 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.036 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.036 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:43.036 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.036 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.036 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:43.036 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.036 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.036 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:43.036 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.036 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:43.036 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:43.294 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:43.294 "name": "BaseBdev2", 00:12:43.294 "aliases": [ 00:12:43.294 "93d20f5c-e161-4754-98c4-ca165f10128f" 00:12:43.294 ], 00:12:43.294 "product_name": 
"Malloc disk", 00:12:43.294 "block_size": 512, 00:12:43.294 "num_blocks": 65536, 00:12:43.294 "uuid": "93d20f5c-e161-4754-98c4-ca165f10128f", 00:12:43.294 "assigned_rate_limits": { 00:12:43.294 "rw_ios_per_sec": 0, 00:12:43.294 "rw_mbytes_per_sec": 0, 00:12:43.294 "r_mbytes_per_sec": 0, 00:12:43.295 "w_mbytes_per_sec": 0 00:12:43.295 }, 00:12:43.295 "claimed": true, 00:12:43.295 "claim_type": "exclusive_write", 00:12:43.295 "zoned": false, 00:12:43.295 "supported_io_types": { 00:12:43.295 "read": true, 00:12:43.295 "write": true, 00:12:43.295 "unmap": true, 00:12:43.295 "flush": true, 00:12:43.295 "reset": true, 00:12:43.295 "nvme_admin": false, 00:12:43.295 "nvme_io": false, 00:12:43.295 "nvme_io_md": false, 00:12:43.295 "write_zeroes": true, 00:12:43.295 "zcopy": true, 00:12:43.295 "get_zone_info": false, 00:12:43.295 "zone_management": false, 00:12:43.295 "zone_append": false, 00:12:43.295 "compare": false, 00:12:43.295 "compare_and_write": false, 00:12:43.295 "abort": true, 00:12:43.295 "seek_hole": false, 00:12:43.295 "seek_data": false, 00:12:43.295 "copy": true, 00:12:43.295 "nvme_iov_md": false 00:12:43.295 }, 00:12:43.295 "memory_domains": [ 00:12:43.295 { 00:12:43.295 "dma_device_id": "system", 00:12:43.295 "dma_device_type": 1 00:12:43.295 }, 00:12:43.295 { 00:12:43.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.295 "dma_device_type": 2 00:12:43.295 } 00:12:43.295 ], 00:12:43.295 "driver_specific": {} 00:12:43.295 }' 00:12:43.295 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.295 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.295 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:43.295 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.552 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.552 
10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:43.552 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.552 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.552 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:43.552 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.552 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.552 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:43.552 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.552 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:43.552 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:43.810 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:43.810 "name": "BaseBdev3", 00:12:43.810 "aliases": [ 00:12:43.810 "c354af05-bfc9-47a2-9694-4ec24cb005cb" 00:12:43.810 ], 00:12:43.810 "product_name": "Malloc disk", 00:12:43.810 "block_size": 512, 00:12:43.810 "num_blocks": 65536, 00:12:43.810 "uuid": "c354af05-bfc9-47a2-9694-4ec24cb005cb", 00:12:43.810 "assigned_rate_limits": { 00:12:43.810 "rw_ios_per_sec": 0, 00:12:43.810 "rw_mbytes_per_sec": 0, 00:12:43.810 "r_mbytes_per_sec": 0, 00:12:43.810 "w_mbytes_per_sec": 0 00:12:43.810 }, 00:12:43.810 "claimed": true, 00:12:43.810 "claim_type": "exclusive_write", 00:12:43.810 "zoned": false, 00:12:43.810 "supported_io_types": { 00:12:43.810 "read": true, 00:12:43.810 "write": true, 00:12:43.810 "unmap": true, 
00:12:43.810 "flush": true, 00:12:43.810 "reset": true, 00:12:43.810 "nvme_admin": false, 00:12:43.810 "nvme_io": false, 00:12:43.810 "nvme_io_md": false, 00:12:43.810 "write_zeroes": true, 00:12:43.810 "zcopy": true, 00:12:43.810 "get_zone_info": false, 00:12:43.810 "zone_management": false, 00:12:43.810 "zone_append": false, 00:12:43.810 "compare": false, 00:12:43.810 "compare_and_write": false, 00:12:43.810 "abort": true, 00:12:43.810 "seek_hole": false, 00:12:43.810 "seek_data": false, 00:12:43.810 "copy": true, 00:12:43.810 "nvme_iov_md": false 00:12:43.810 }, 00:12:43.810 "memory_domains": [ 00:12:43.810 { 00:12:43.810 "dma_device_id": "system", 00:12:43.810 "dma_device_type": 1 00:12:43.810 }, 00:12:43.810 { 00:12:43.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.810 "dma_device_type": 2 00:12:43.810 } 00:12:43.810 ], 00:12:43.810 "driver_specific": {} 00:12:43.810 }' 00:12:43.810 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.067 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.067 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:44.067 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.067 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.067 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:44.067 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.067 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.067 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:44.067 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.067 10:27:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.067 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:44.067 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:44.324 [2024-07-25 10:27:47.994480] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.324 [2024-07-25 10:27:47.994504] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.324 [2024-07-25 10:27:47.994556] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.324 [2024-07-25 10:27:47.994611] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.324 [2024-07-25 10:27:47.994624] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10ccd90 name Existed_Raid, state offline 00:12:44.324 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2355508 00:12:44.324 10:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2355508 ']' 00:12:44.324 10:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2355508 00:12:44.324 10:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:44.324 10:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:44.324 10:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2355508 00:12:44.582 10:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:44.582 10:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:12:44.582 10:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2355508' 00:12:44.582 killing process with pid 2355508 00:12:44.582 10:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2355508 00:12:44.582 [2024-07-25 10:27:48.041434] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.582 10:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2355508 00:12:44.582 [2024-07-25 10:27:48.096059] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:44.839 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:44.839 00:12:44.839 real 0m28.587s 00:12:44.839 user 0m53.066s 00:12:44.839 sys 0m3.928s 00:12:44.839 10:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.839 10:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.839 ************************************ 00:12:44.839 END TEST raid_state_function_test_sb 00:12:44.839 ************************************ 00:12:45.097 10:27:48 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:12:45.097 10:27:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:45.097 10:27:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.097 10:27:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.097 ************************************ 00:12:45.097 START TEST raid_superblock_test 00:12:45.097 ************************************ 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2359442 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2359442 /var/tmp/spdk-raid.sock 00:12:45.097 10:27:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2359442 ']' 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:45.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:45.097 10:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.097 [2024-07-25 10:27:48.642794] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:12:45.097 [2024-07-25 10:27:48.642877] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359442 ] 00:12:45.097 [2024-07-25 10:27:48.720191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.354 [2024-07-25 10:27:48.832479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.355 [2024-07-25 10:27:48.907489] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.355 [2024-07-25 10:27:48.907533] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.919 10:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.919 10:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:45.919 10:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:12:45.919 
10:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:45.919 10:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:12:45.919 10:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:12:45.919 10:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:45.919 10:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:45.919 10:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:45.919 10:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:45.919 10:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:46.176 malloc1 00:12:46.176 10:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:46.434 [2024-07-25 10:27:50.086676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:46.434 [2024-07-25 10:27:50.086745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.434 [2024-07-25 10:27:50.086772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26432b0 00:12:46.434 [2024-07-25 10:27:50.086785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.434 [2024-07-25 10:27:50.088436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.435 [2024-07-25 10:27:50.088473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 
00:12:46.435 pt1 00:12:46.435 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:46.435 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:46.435 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:12:46.435 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:12:46.435 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:46.435 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:46.435 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:46.435 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:46.435 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:46.692 malloc2 00:12:46.692 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:46.950 [2024-07-25 10:27:50.599224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:46.950 [2024-07-25 10:27:50.599304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.950 [2024-07-25 10:27:50.599328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27f61e0 00:12:46.950 [2024-07-25 10:27:50.599342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.950 [2024-07-25 10:27:50.600989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:12:46.950 [2024-07-25 10:27:50.601012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:46.950 pt2 00:12:46.950 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:46.950 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:46.950 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:12:46.950 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:12:46.950 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:46.950 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:46.950 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:46.950 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:46.950 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:47.209 malloc3 00:12:47.209 10:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:47.467 [2024-07-25 10:27:51.099669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:47.467 [2024-07-25 10:27:51.099743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.467 [2024-07-25 10:27:51.099767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27dc4d0 00:12:47.467 [2024-07-25 10:27:51.099780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:47.467 [2024-07-25 10:27:51.101497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.467 [2024-07-25 10:27:51.101520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:47.467 pt3 00:12:47.467 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:47.467 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:47.467 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:47.724 [2024-07-25 10:27:51.348360] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:47.724 [2024-07-25 10:27:51.349668] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:47.724 [2024-07-25 10:27:51.349723] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:47.724 [2024-07-25 10:27:51.349884] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x27db120 00:12:47.724 [2024-07-25 10:27:51.349898] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:47.724 [2024-07-25 10:27:51.350098] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x263adc0 00:12:47.724 [2024-07-25 10:27:51.350291] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x27db120 00:12:47.724 [2024-07-25 10:27:51.350305] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x27db120 00:12:47.724 [2024-07-25 10:27:51.350441] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.724 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:47.724 10:27:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:47.724 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:47.724 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:47.724 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:47.724 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:47.724 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.724 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.724 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.724 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.725 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.725 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.982 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:47.982 "name": "raid_bdev1", 00:12:47.982 "uuid": "bdba6ffa-c678-4d40-b158-5490e1085e59", 00:12:47.982 "strip_size_kb": 64, 00:12:47.982 "state": "online", 00:12:47.982 "raid_level": "raid0", 00:12:47.982 "superblock": true, 00:12:47.982 "num_base_bdevs": 3, 00:12:47.982 "num_base_bdevs_discovered": 3, 00:12:47.982 "num_base_bdevs_operational": 3, 00:12:47.982 "base_bdevs_list": [ 00:12:47.982 { 00:12:47.982 "name": "pt1", 00:12:47.982 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.982 "is_configured": true, 00:12:47.982 "data_offset": 2048, 00:12:47.982 "data_size": 63488 00:12:47.982 }, 00:12:47.982 { 
00:12:47.982 "name": "pt2", 00:12:47.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.982 "is_configured": true, 00:12:47.982 "data_offset": 2048, 00:12:47.982 "data_size": 63488 00:12:47.982 }, 00:12:47.982 { 00:12:47.982 "name": "pt3", 00:12:47.982 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.982 "is_configured": true, 00:12:47.982 "data_offset": 2048, 00:12:47.982 "data_size": 63488 00:12:47.982 } 00:12:47.982 ] 00:12:47.982 }' 00:12:47.982 10:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:47.982 10:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.548 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:48.548 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:48.548 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:48.548 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:48.548 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:48.548 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:48.548 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:48.548 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:48.806 [2024-07-25 10:27:52.387372] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.806 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:48.806 "name": "raid_bdev1", 00:12:48.806 "aliases": [ 00:12:48.806 "bdba6ffa-c678-4d40-b158-5490e1085e59" 00:12:48.806 ], 00:12:48.806 "product_name": "Raid Volume", 00:12:48.806 
"block_size": 512, 00:12:48.806 "num_blocks": 190464, 00:12:48.806 "uuid": "bdba6ffa-c678-4d40-b158-5490e1085e59", 00:12:48.806 "assigned_rate_limits": { 00:12:48.806 "rw_ios_per_sec": 0, 00:12:48.806 "rw_mbytes_per_sec": 0, 00:12:48.806 "r_mbytes_per_sec": 0, 00:12:48.806 "w_mbytes_per_sec": 0 00:12:48.806 }, 00:12:48.806 "claimed": false, 00:12:48.806 "zoned": false, 00:12:48.806 "supported_io_types": { 00:12:48.806 "read": true, 00:12:48.806 "write": true, 00:12:48.806 "unmap": true, 00:12:48.806 "flush": true, 00:12:48.806 "reset": true, 00:12:48.806 "nvme_admin": false, 00:12:48.806 "nvme_io": false, 00:12:48.806 "nvme_io_md": false, 00:12:48.806 "write_zeroes": true, 00:12:48.806 "zcopy": false, 00:12:48.806 "get_zone_info": false, 00:12:48.806 "zone_management": false, 00:12:48.806 "zone_append": false, 00:12:48.806 "compare": false, 00:12:48.806 "compare_and_write": false, 00:12:48.806 "abort": false, 00:12:48.806 "seek_hole": false, 00:12:48.806 "seek_data": false, 00:12:48.806 "copy": false, 00:12:48.806 "nvme_iov_md": false 00:12:48.806 }, 00:12:48.806 "memory_domains": [ 00:12:48.806 { 00:12:48.806 "dma_device_id": "system", 00:12:48.806 "dma_device_type": 1 00:12:48.806 }, 00:12:48.806 { 00:12:48.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.806 "dma_device_type": 2 00:12:48.806 }, 00:12:48.806 { 00:12:48.806 "dma_device_id": "system", 00:12:48.806 "dma_device_type": 1 00:12:48.806 }, 00:12:48.806 { 00:12:48.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.806 "dma_device_type": 2 00:12:48.806 }, 00:12:48.806 { 00:12:48.806 "dma_device_id": "system", 00:12:48.806 "dma_device_type": 1 00:12:48.806 }, 00:12:48.806 { 00:12:48.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.806 "dma_device_type": 2 00:12:48.806 } 00:12:48.806 ], 00:12:48.806 "driver_specific": { 00:12:48.806 "raid": { 00:12:48.806 "uuid": "bdba6ffa-c678-4d40-b158-5490e1085e59", 00:12:48.806 "strip_size_kb": 64, 00:12:48.806 "state": "online", 00:12:48.806 
"raid_level": "raid0", 00:12:48.806 "superblock": true, 00:12:48.806 "num_base_bdevs": 3, 00:12:48.806 "num_base_bdevs_discovered": 3, 00:12:48.806 "num_base_bdevs_operational": 3, 00:12:48.806 "base_bdevs_list": [ 00:12:48.806 { 00:12:48.806 "name": "pt1", 00:12:48.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:48.806 "is_configured": true, 00:12:48.806 "data_offset": 2048, 00:12:48.806 "data_size": 63488 00:12:48.806 }, 00:12:48.806 { 00:12:48.806 "name": "pt2", 00:12:48.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:48.806 "is_configured": true, 00:12:48.806 "data_offset": 2048, 00:12:48.806 "data_size": 63488 00:12:48.806 }, 00:12:48.806 { 00:12:48.806 "name": "pt3", 00:12:48.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:48.806 "is_configured": true, 00:12:48.806 "data_offset": 2048, 00:12:48.806 "data_size": 63488 00:12:48.806 } 00:12:48.806 ] 00:12:48.806 } 00:12:48.806 } 00:12:48.806 }' 00:12:48.806 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:48.806 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:48.806 pt2 00:12:48.806 pt3' 00:12:48.806 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:48.806 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:48.806 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:49.064 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:49.064 "name": "pt1", 00:12:49.064 "aliases": [ 00:12:49.064 "00000000-0000-0000-0000-000000000001" 00:12:49.064 ], 00:12:49.064 "product_name": "passthru", 00:12:49.064 "block_size": 512, 00:12:49.064 "num_blocks": 65536, 00:12:49.064 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:12:49.064 "assigned_rate_limits": { 00:12:49.064 "rw_ios_per_sec": 0, 00:12:49.064 "rw_mbytes_per_sec": 0, 00:12:49.064 "r_mbytes_per_sec": 0, 00:12:49.064 "w_mbytes_per_sec": 0 00:12:49.064 }, 00:12:49.064 "claimed": true, 00:12:49.064 "claim_type": "exclusive_write", 00:12:49.064 "zoned": false, 00:12:49.064 "supported_io_types": { 00:12:49.064 "read": true, 00:12:49.064 "write": true, 00:12:49.064 "unmap": true, 00:12:49.064 "flush": true, 00:12:49.064 "reset": true, 00:12:49.064 "nvme_admin": false, 00:12:49.064 "nvme_io": false, 00:12:49.064 "nvme_io_md": false, 00:12:49.064 "write_zeroes": true, 00:12:49.064 "zcopy": true, 00:12:49.064 "get_zone_info": false, 00:12:49.064 "zone_management": false, 00:12:49.064 "zone_append": false, 00:12:49.064 "compare": false, 00:12:49.064 "compare_and_write": false, 00:12:49.064 "abort": true, 00:12:49.064 "seek_hole": false, 00:12:49.064 "seek_data": false, 00:12:49.064 "copy": true, 00:12:49.064 "nvme_iov_md": false 00:12:49.064 }, 00:12:49.064 "memory_domains": [ 00:12:49.064 { 00:12:49.064 "dma_device_id": "system", 00:12:49.064 "dma_device_type": 1 00:12:49.064 }, 00:12:49.064 { 00:12:49.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.064 "dma_device_type": 2 00:12:49.064 } 00:12:49.064 ], 00:12:49.064 "driver_specific": { 00:12:49.064 "passthru": { 00:12:49.064 "name": "pt1", 00:12:49.064 "base_bdev_name": "malloc1" 00:12:49.064 } 00:12:49.064 } 00:12:49.064 }' 00:12:49.064 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:49.064 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:49.322 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:49.322 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:49.322 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:49.322 10:27:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:49.322 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:49.322 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:49.322 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:49.322 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:49.322 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:49.322 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:49.322 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:49.322 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:49.322 10:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:49.580 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:49.580 "name": "pt2", 00:12:49.580 "aliases": [ 00:12:49.580 "00000000-0000-0000-0000-000000000002" 00:12:49.580 ], 00:12:49.580 "product_name": "passthru", 00:12:49.580 "block_size": 512, 00:12:49.580 "num_blocks": 65536, 00:12:49.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:49.580 "assigned_rate_limits": { 00:12:49.580 "rw_ios_per_sec": 0, 00:12:49.580 "rw_mbytes_per_sec": 0, 00:12:49.580 "r_mbytes_per_sec": 0, 00:12:49.580 "w_mbytes_per_sec": 0 00:12:49.580 }, 00:12:49.580 "claimed": true, 00:12:49.580 "claim_type": "exclusive_write", 00:12:49.580 "zoned": false, 00:12:49.580 "supported_io_types": { 00:12:49.580 "read": true, 00:12:49.580 "write": true, 00:12:49.580 "unmap": true, 00:12:49.580 "flush": true, 00:12:49.580 "reset": true, 00:12:49.580 "nvme_admin": false, 00:12:49.580 
"nvme_io": false, 00:12:49.580 "nvme_io_md": false, 00:12:49.580 "write_zeroes": true, 00:12:49.580 "zcopy": true, 00:12:49.580 "get_zone_info": false, 00:12:49.580 "zone_management": false, 00:12:49.580 "zone_append": false, 00:12:49.580 "compare": false, 00:12:49.580 "compare_and_write": false, 00:12:49.580 "abort": true, 00:12:49.581 "seek_hole": false, 00:12:49.581 "seek_data": false, 00:12:49.581 "copy": true, 00:12:49.581 "nvme_iov_md": false 00:12:49.581 }, 00:12:49.581 "memory_domains": [ 00:12:49.581 { 00:12:49.581 "dma_device_id": "system", 00:12:49.581 "dma_device_type": 1 00:12:49.581 }, 00:12:49.581 { 00:12:49.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.581 "dma_device_type": 2 00:12:49.581 } 00:12:49.581 ], 00:12:49.581 "driver_specific": { 00:12:49.581 "passthru": { 00:12:49.581 "name": "pt2", 00:12:49.581 "base_bdev_name": "malloc2" 00:12:49.581 } 00:12:49.581 } 00:12:49.581 }' 00:12:49.581 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:49.581 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:49.838 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:49.838 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:49.838 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:49.838 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:49.838 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:49.838 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:49.838 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:49.838 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:49.838 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:12:49.838 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:49.838 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:49.838 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:49.838 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:50.095 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:50.095 "name": "pt3", 00:12:50.095 "aliases": [ 00:12:50.095 "00000000-0000-0000-0000-000000000003" 00:12:50.095 ], 00:12:50.095 "product_name": "passthru", 00:12:50.095 "block_size": 512, 00:12:50.095 "num_blocks": 65536, 00:12:50.095 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:50.095 "assigned_rate_limits": { 00:12:50.095 "rw_ios_per_sec": 0, 00:12:50.095 "rw_mbytes_per_sec": 0, 00:12:50.095 "r_mbytes_per_sec": 0, 00:12:50.095 "w_mbytes_per_sec": 0 00:12:50.095 }, 00:12:50.095 "claimed": true, 00:12:50.095 "claim_type": "exclusive_write", 00:12:50.095 "zoned": false, 00:12:50.095 "supported_io_types": { 00:12:50.095 "read": true, 00:12:50.095 "write": true, 00:12:50.095 "unmap": true, 00:12:50.095 "flush": true, 00:12:50.095 "reset": true, 00:12:50.095 "nvme_admin": false, 00:12:50.095 "nvme_io": false, 00:12:50.095 "nvme_io_md": false, 00:12:50.095 "write_zeroes": true, 00:12:50.095 "zcopy": true, 00:12:50.095 "get_zone_info": false, 00:12:50.095 "zone_management": false, 00:12:50.095 "zone_append": false, 00:12:50.095 "compare": false, 00:12:50.095 "compare_and_write": false, 00:12:50.095 "abort": true, 00:12:50.095 "seek_hole": false, 00:12:50.095 "seek_data": false, 00:12:50.095 "copy": true, 00:12:50.095 "nvme_iov_md": false 00:12:50.095 }, 00:12:50.095 "memory_domains": [ 00:12:50.095 { 00:12:50.095 "dma_device_id": "system", 00:12:50.095 
"dma_device_type": 1 00:12:50.095 }, 00:12:50.095 { 00:12:50.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.095 "dma_device_type": 2 00:12:50.095 } 00:12:50.095 ], 00:12:50.095 "driver_specific": { 00:12:50.095 "passthru": { 00:12:50.095 "name": "pt3", 00:12:50.095 "base_bdev_name": "malloc3" 00:12:50.095 } 00:12:50.095 } 00:12:50.095 }' 00:12:50.095 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.352 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.352 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:50.352 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.352 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.352 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:50.352 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.352 10:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.353 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:50.353 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.353 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.610 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:50.610 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:50.610 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:50.867 [2024-07-25 10:27:54.320503] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.867 10:27:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=bdba6ffa-c678-4d40-b158-5490e1085e59 00:12:50.867 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z bdba6ffa-c678-4d40-b158-5490e1085e59 ']' 00:12:50.867 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:50.867 [2024-07-25 10:27:54.560833] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.867 [2024-07-25 10:27:54.560857] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.867 [2024-07-25 10:27:54.560931] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.867 [2024-07-25 10:27:54.560995] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.867 [2024-07-25 10:27:54.561008] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27db120 name raid_bdev1, state offline 00:12:51.125 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.125 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:51.125 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:51.125 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:12:51.125 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:51.125 10:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:51.382 10:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i 
in "${base_bdevs_pt[@]}" 00:12:51.382 10:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:51.946 10:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:51.946 10:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:51.946 10:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:51.946 10:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:52.512 10:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:52.512 10:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:52.512 10:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:52.512 10:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:52.512 10:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:52.512 10:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.512 10:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:52.512 10:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.512 10:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:52.512 10:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.512 10:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:52.512 10:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:12:52.512 10:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:52.512 [2024-07-25 10:27:56.152999] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:52.512 [2024-07-25 10:27:56.154256] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:52.512 [2024-07-25 10:27:56.154301] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:52.512 [2024-07-25 10:27:56.154357] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:52.512 [2024-07-25 10:27:56.154420] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:52.512 [2024-07-25 10:27:56.154449] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:52.512 [2024-07-25 10:27:56.154484] bdev_raid.c:2382:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:12:52.512 [2024-07-25 10:27:56.154494] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27e6170 name raid_bdev1, state configuring 00:12:52.512 request: 00:12:52.512 { 00:12:52.512 "name": "raid_bdev1", 00:12:52.512 "raid_level": "raid0", 00:12:52.512 "base_bdevs": [ 00:12:52.512 "malloc1", 00:12:52.512 "malloc2", 00:12:52.512 "malloc3" 00:12:52.512 ], 00:12:52.513 "strip_size_kb": 64, 00:12:52.513 "superblock": false, 00:12:52.513 "method": "bdev_raid_create", 00:12:52.513 "req_id": 1 00:12:52.513 } 00:12:52.513 Got JSON-RPC error response 00:12:52.513 response: 00:12:52.513 { 00:12:52.513 "code": -17, 00:12:52.513 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:52.513 } 00:12:52.513 10:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:52.513 10:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:52.513 10:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:52.513 10:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:52.513 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.513 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:52.771 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:52.771 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:52.771 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:53.029 [2024-07-25 10:27:56.642218] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:12:53.029 [2024-07-25 10:27:56.642279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.029 [2024-07-25 10:27:56.642303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27dc700 00:12:53.029 [2024-07-25 10:27:56.642318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.029 [2024-07-25 10:27:56.644100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.029 [2024-07-25 10:27:56.644138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:53.029 [2024-07-25 10:27:56.644220] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:53.029 [2024-07-25 10:27:56.644256] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:53.029 pt1 00:12:53.029 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:53.029 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:53.029 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:53.029 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:53.029 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:53.029 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:53.029 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:53.029 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:53.029 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:53.029 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:12:53.029 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.029 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.287 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:53.287 "name": "raid_bdev1", 00:12:53.287 "uuid": "bdba6ffa-c678-4d40-b158-5490e1085e59", 00:12:53.287 "strip_size_kb": 64, 00:12:53.287 "state": "configuring", 00:12:53.287 "raid_level": "raid0", 00:12:53.287 "superblock": true, 00:12:53.287 "num_base_bdevs": 3, 00:12:53.287 "num_base_bdevs_discovered": 1, 00:12:53.287 "num_base_bdevs_operational": 3, 00:12:53.287 "base_bdevs_list": [ 00:12:53.287 { 00:12:53.287 "name": "pt1", 00:12:53.287 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:53.287 "is_configured": true, 00:12:53.287 "data_offset": 2048, 00:12:53.287 "data_size": 63488 00:12:53.287 }, 00:12:53.287 { 00:12:53.287 "name": null, 00:12:53.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:53.287 "is_configured": false, 00:12:53.287 "data_offset": 2048, 00:12:53.287 "data_size": 63488 00:12:53.287 }, 00:12:53.287 { 00:12:53.287 "name": null, 00:12:53.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:53.287 "is_configured": false, 00:12:53.287 "data_offset": 2048, 00:12:53.287 "data_size": 63488 00:12:53.287 } 00:12:53.287 ] 00:12:53.287 }' 00:12:53.287 10:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:53.287 10:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.853 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:12:53.853 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:54.112 [2024-07-25 10:27:57.737161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:54.112 [2024-07-25 10:27:57.737226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.112 [2024-07-25 10:27:57.737252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27e63f0 00:12:54.112 [2024-07-25 10:27:57.737269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.112 [2024-07-25 10:27:57.737695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.112 [2024-07-25 10:27:57.737720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:54.112 [2024-07-25 10:27:57.737818] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:54.112 [2024-07-25 10:27:57.737847] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:54.112 pt2 00:12:54.112 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:54.370 [2024-07-25 10:27:58.021945] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:54.370 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:54.370 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:54.370 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:54.370 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:54.370 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:54.370 10:27:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:54.370 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:54.370 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:54.370 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:54.370 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:54.370 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.370 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.628 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.628 "name": "raid_bdev1", 00:12:54.628 "uuid": "bdba6ffa-c678-4d40-b158-5490e1085e59", 00:12:54.628 "strip_size_kb": 64, 00:12:54.628 "state": "configuring", 00:12:54.628 "raid_level": "raid0", 00:12:54.628 "superblock": true, 00:12:54.628 "num_base_bdevs": 3, 00:12:54.628 "num_base_bdevs_discovered": 1, 00:12:54.628 "num_base_bdevs_operational": 3, 00:12:54.628 "base_bdevs_list": [ 00:12:54.628 { 00:12:54.628 "name": "pt1", 00:12:54.628 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:54.628 "is_configured": true, 00:12:54.628 "data_offset": 2048, 00:12:54.628 "data_size": 63488 00:12:54.628 }, 00:12:54.628 { 00:12:54.628 "name": null, 00:12:54.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:54.628 "is_configured": false, 00:12:54.628 "data_offset": 2048, 00:12:54.628 "data_size": 63488 00:12:54.628 }, 00:12:54.628 { 00:12:54.628 "name": null, 00:12:54.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:54.628 "is_configured": false, 00:12:54.628 "data_offset": 2048, 00:12:54.628 "data_size": 63488 00:12:54.628 } 00:12:54.628 ] 00:12:54.628 }' 
00:12:54.628 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:54.628 10:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.561 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:55.561 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:55.562 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:55.562 [2024-07-25 10:27:59.148889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:55.562 [2024-07-25 10:27:59.148951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.562 [2024-07-25 10:27:59.148977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27dd580 00:12:55.562 [2024-07-25 10:27:59.148993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.562 [2024-07-25 10:27:59.149419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.562 [2024-07-25 10:27:59.149445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:55.562 [2024-07-25 10:27:59.149544] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:55.562 [2024-07-25 10:27:59.149573] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:55.562 pt2 00:12:55.562 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:55.562 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:55.562 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:55.820 [2024-07-25 10:27:59.397558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:55.820 [2024-07-25 10:27:59.397626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.820 [2024-07-25 10:27:59.397649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27dd290 00:12:55.820 [2024-07-25 10:27:59.397662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.820 [2024-07-25 10:27:59.398041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.820 [2024-07-25 10:27:59.398063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:55.820 [2024-07-25 10:27:59.398168] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:55.820 [2024-07-25 10:27:59.398194] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:55.820 [2024-07-25 10:27:59.398315] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x263af00 00:12:55.820 [2024-07-25 10:27:59.398329] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:55.820 [2024-07-25 10:27:59.398506] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x263b5c0 00:12:55.820 [2024-07-25 10:27:59.398627] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x263af00 00:12:55.820 [2024-07-25 10:27:59.398640] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x263af00 00:12:55.820 [2024-07-25 10:27:59.398730] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.820 pt3 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.820 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.078 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.078 "name": "raid_bdev1", 00:12:56.078 "uuid": "bdba6ffa-c678-4d40-b158-5490e1085e59", 00:12:56.078 "strip_size_kb": 64, 00:12:56.078 "state": "online", 00:12:56.078 "raid_level": "raid0", 00:12:56.078 "superblock": true, 00:12:56.078 "num_base_bdevs": 3, 00:12:56.078 "num_base_bdevs_discovered": 3, 00:12:56.078 "num_base_bdevs_operational": 3, 00:12:56.078 "base_bdevs_list": [ 00:12:56.078 { 00:12:56.078 
"name": "pt1", 00:12:56.078 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.078 "is_configured": true, 00:12:56.078 "data_offset": 2048, 00:12:56.078 "data_size": 63488 00:12:56.078 }, 00:12:56.078 { 00:12:56.078 "name": "pt2", 00:12:56.078 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.078 "is_configured": true, 00:12:56.078 "data_offset": 2048, 00:12:56.078 "data_size": 63488 00:12:56.078 }, 00:12:56.078 { 00:12:56.078 "name": "pt3", 00:12:56.078 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.078 "is_configured": true, 00:12:56.078 "data_offset": 2048, 00:12:56.078 "data_size": 63488 00:12:56.078 } 00:12:56.078 ] 00:12:56.078 }' 00:12:56.078 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.078 10:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.644 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:56.644 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:56.644 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:56.644 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:56.644 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:56.644 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:56.644 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:56.644 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:56.902 [2024-07-25 10:28:00.440660] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.902 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 
-- # raid_bdev_info='{ 00:12:56.902 "name": "raid_bdev1", 00:12:56.902 "aliases": [ 00:12:56.902 "bdba6ffa-c678-4d40-b158-5490e1085e59" 00:12:56.902 ], 00:12:56.902 "product_name": "Raid Volume", 00:12:56.902 "block_size": 512, 00:12:56.902 "num_blocks": 190464, 00:12:56.902 "uuid": "bdba6ffa-c678-4d40-b158-5490e1085e59", 00:12:56.902 "assigned_rate_limits": { 00:12:56.902 "rw_ios_per_sec": 0, 00:12:56.902 "rw_mbytes_per_sec": 0, 00:12:56.902 "r_mbytes_per_sec": 0, 00:12:56.902 "w_mbytes_per_sec": 0 00:12:56.902 }, 00:12:56.902 "claimed": false, 00:12:56.902 "zoned": false, 00:12:56.902 "supported_io_types": { 00:12:56.902 "read": true, 00:12:56.902 "write": true, 00:12:56.902 "unmap": true, 00:12:56.902 "flush": true, 00:12:56.902 "reset": true, 00:12:56.902 "nvme_admin": false, 00:12:56.902 "nvme_io": false, 00:12:56.902 "nvme_io_md": false, 00:12:56.902 "write_zeroes": true, 00:12:56.902 "zcopy": false, 00:12:56.902 "get_zone_info": false, 00:12:56.902 "zone_management": false, 00:12:56.902 "zone_append": false, 00:12:56.902 "compare": false, 00:12:56.902 "compare_and_write": false, 00:12:56.902 "abort": false, 00:12:56.902 "seek_hole": false, 00:12:56.902 "seek_data": false, 00:12:56.902 "copy": false, 00:12:56.902 "nvme_iov_md": false 00:12:56.902 }, 00:12:56.902 "memory_domains": [ 00:12:56.902 { 00:12:56.902 "dma_device_id": "system", 00:12:56.902 "dma_device_type": 1 00:12:56.902 }, 00:12:56.902 { 00:12:56.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.902 "dma_device_type": 2 00:12:56.902 }, 00:12:56.902 { 00:12:56.902 "dma_device_id": "system", 00:12:56.902 "dma_device_type": 1 00:12:56.902 }, 00:12:56.902 { 00:12:56.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.902 "dma_device_type": 2 00:12:56.902 }, 00:12:56.902 { 00:12:56.902 "dma_device_id": "system", 00:12:56.902 "dma_device_type": 1 00:12:56.902 }, 00:12:56.902 { 00:12:56.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.902 "dma_device_type": 2 00:12:56.902 } 00:12:56.902 ], 
00:12:56.902 "driver_specific": { 00:12:56.902 "raid": { 00:12:56.902 "uuid": "bdba6ffa-c678-4d40-b158-5490e1085e59", 00:12:56.902 "strip_size_kb": 64, 00:12:56.902 "state": "online", 00:12:56.902 "raid_level": "raid0", 00:12:56.902 "superblock": true, 00:12:56.902 "num_base_bdevs": 3, 00:12:56.902 "num_base_bdevs_discovered": 3, 00:12:56.902 "num_base_bdevs_operational": 3, 00:12:56.902 "base_bdevs_list": [ 00:12:56.902 { 00:12:56.902 "name": "pt1", 00:12:56.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.902 "is_configured": true, 00:12:56.902 "data_offset": 2048, 00:12:56.902 "data_size": 63488 00:12:56.902 }, 00:12:56.902 { 00:12:56.902 "name": "pt2", 00:12:56.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.902 "is_configured": true, 00:12:56.902 "data_offset": 2048, 00:12:56.902 "data_size": 63488 00:12:56.902 }, 00:12:56.902 { 00:12:56.902 "name": "pt3", 00:12:56.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.902 "is_configured": true, 00:12:56.902 "data_offset": 2048, 00:12:56.902 "data_size": 63488 00:12:56.902 } 00:12:56.902 ] 00:12:56.902 } 00:12:56.902 } 00:12:56.902 }' 00:12:56.902 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:56.902 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:56.902 pt2 00:12:56.902 pt3' 00:12:56.902 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:56.902 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:56.902 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:57.160 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:57.160 "name": "pt1", 00:12:57.160 "aliases": [ 
00:12:57.160 "00000000-0000-0000-0000-000000000001" 00:12:57.160 ], 00:12:57.160 "product_name": "passthru", 00:12:57.160 "block_size": 512, 00:12:57.160 "num_blocks": 65536, 00:12:57.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.160 "assigned_rate_limits": { 00:12:57.160 "rw_ios_per_sec": 0, 00:12:57.160 "rw_mbytes_per_sec": 0, 00:12:57.160 "r_mbytes_per_sec": 0, 00:12:57.160 "w_mbytes_per_sec": 0 00:12:57.160 }, 00:12:57.160 "claimed": true, 00:12:57.160 "claim_type": "exclusive_write", 00:12:57.160 "zoned": false, 00:12:57.160 "supported_io_types": { 00:12:57.160 "read": true, 00:12:57.160 "write": true, 00:12:57.160 "unmap": true, 00:12:57.160 "flush": true, 00:12:57.160 "reset": true, 00:12:57.160 "nvme_admin": false, 00:12:57.160 "nvme_io": false, 00:12:57.160 "nvme_io_md": false, 00:12:57.160 "write_zeroes": true, 00:12:57.160 "zcopy": true, 00:12:57.160 "get_zone_info": false, 00:12:57.160 "zone_management": false, 00:12:57.160 "zone_append": false, 00:12:57.160 "compare": false, 00:12:57.160 "compare_and_write": false, 00:12:57.160 "abort": true, 00:12:57.160 "seek_hole": false, 00:12:57.160 "seek_data": false, 00:12:57.160 "copy": true, 00:12:57.160 "nvme_iov_md": false 00:12:57.160 }, 00:12:57.160 "memory_domains": [ 00:12:57.160 { 00:12:57.160 "dma_device_id": "system", 00:12:57.160 "dma_device_type": 1 00:12:57.160 }, 00:12:57.160 { 00:12:57.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.160 "dma_device_type": 2 00:12:57.160 } 00:12:57.160 ], 00:12:57.160 "driver_specific": { 00:12:57.160 "passthru": { 00:12:57.160 "name": "pt1", 00:12:57.160 "base_bdev_name": "malloc1" 00:12:57.160 } 00:12:57.160 } 00:12:57.160 }' 00:12:57.160 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:57.160 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:57.160 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:57.160 10:28:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:57.160 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:57.455 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:57.455 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:57.455 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:57.455 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:57.455 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:57.455 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:57.455 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:57.455 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:57.455 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:57.455 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:57.713 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:57.713 "name": "pt2", 00:12:57.713 "aliases": [ 00:12:57.713 "00000000-0000-0000-0000-000000000002" 00:12:57.713 ], 00:12:57.713 "product_name": "passthru", 00:12:57.713 "block_size": 512, 00:12:57.713 "num_blocks": 65536, 00:12:57.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.713 "assigned_rate_limits": { 00:12:57.713 "rw_ios_per_sec": 0, 00:12:57.713 "rw_mbytes_per_sec": 0, 00:12:57.713 "r_mbytes_per_sec": 0, 00:12:57.713 "w_mbytes_per_sec": 0 00:12:57.713 }, 00:12:57.713 "claimed": true, 00:12:57.713 "claim_type": "exclusive_write", 00:12:57.713 "zoned": false, 00:12:57.713 "supported_io_types": { 
00:12:57.713 "read": true, 00:12:57.713 "write": true, 00:12:57.713 "unmap": true, 00:12:57.713 "flush": true, 00:12:57.713 "reset": true, 00:12:57.713 "nvme_admin": false, 00:12:57.713 "nvme_io": false, 00:12:57.713 "nvme_io_md": false, 00:12:57.713 "write_zeroes": true, 00:12:57.713 "zcopy": true, 00:12:57.713 "get_zone_info": false, 00:12:57.713 "zone_management": false, 00:12:57.713 "zone_append": false, 00:12:57.713 "compare": false, 00:12:57.713 "compare_and_write": false, 00:12:57.713 "abort": true, 00:12:57.713 "seek_hole": false, 00:12:57.713 "seek_data": false, 00:12:57.713 "copy": true, 00:12:57.713 "nvme_iov_md": false 00:12:57.713 }, 00:12:57.713 "memory_domains": [ 00:12:57.713 { 00:12:57.713 "dma_device_id": "system", 00:12:57.713 "dma_device_type": 1 00:12:57.713 }, 00:12:57.713 { 00:12:57.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.713 "dma_device_type": 2 00:12:57.713 } 00:12:57.713 ], 00:12:57.713 "driver_specific": { 00:12:57.713 "passthru": { 00:12:57.713 "name": "pt2", 00:12:57.713 "base_bdev_name": "malloc2" 00:12:57.713 } 00:12:57.713 } 00:12:57.713 }' 00:12:57.713 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:57.713 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:57.713 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:57.713 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:57.713 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:57.713 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:57.713 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:57.713 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:57.971 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:12:57.971 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:57.971 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:57.971 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:57.971 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:57.971 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:57.971 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.229 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.229 "name": "pt3", 00:12:58.229 "aliases": [ 00:12:58.229 "00000000-0000-0000-0000-000000000003" 00:12:58.229 ], 00:12:58.229 "product_name": "passthru", 00:12:58.229 "block_size": 512, 00:12:58.229 "num_blocks": 65536, 00:12:58.229 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.229 "assigned_rate_limits": { 00:12:58.229 "rw_ios_per_sec": 0, 00:12:58.229 "rw_mbytes_per_sec": 0, 00:12:58.229 "r_mbytes_per_sec": 0, 00:12:58.229 "w_mbytes_per_sec": 0 00:12:58.229 }, 00:12:58.229 "claimed": true, 00:12:58.229 "claim_type": "exclusive_write", 00:12:58.229 "zoned": false, 00:12:58.229 "supported_io_types": { 00:12:58.229 "read": true, 00:12:58.229 "write": true, 00:12:58.229 "unmap": true, 00:12:58.229 "flush": true, 00:12:58.229 "reset": true, 00:12:58.229 "nvme_admin": false, 00:12:58.229 "nvme_io": false, 00:12:58.229 "nvme_io_md": false, 00:12:58.229 "write_zeroes": true, 00:12:58.229 "zcopy": true, 00:12:58.229 "get_zone_info": false, 00:12:58.229 "zone_management": false, 00:12:58.229 "zone_append": false, 00:12:58.229 "compare": false, 00:12:58.229 "compare_and_write": false, 00:12:58.229 "abort": true, 00:12:58.229 "seek_hole": false, 00:12:58.229 "seek_data": 
false, 00:12:58.229 "copy": true, 00:12:58.229 "nvme_iov_md": false 00:12:58.229 }, 00:12:58.229 "memory_domains": [ 00:12:58.229 { 00:12:58.229 "dma_device_id": "system", 00:12:58.229 "dma_device_type": 1 00:12:58.229 }, 00:12:58.229 { 00:12:58.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.229 "dma_device_type": 2 00:12:58.229 } 00:12:58.229 ], 00:12:58.229 "driver_specific": { 00:12:58.229 "passthru": { 00:12:58.229 "name": "pt3", 00:12:58.229 "base_bdev_name": "malloc3" 00:12:58.229 } 00:12:58.229 } 00:12:58.229 }' 00:12:58.229 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.229 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.229 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.229 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.229 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.229 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.229 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.229 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.487 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.487 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.487 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.487 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.487 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:58.487 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 
-- # jq -r '.[] | .uuid' 00:12:58.745 [2024-07-25 10:28:02.257507] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' bdba6ffa-c678-4d40-b158-5490e1085e59 '!=' bdba6ffa-c678-4d40-b158-5490e1085e59 ']' 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2359442 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2359442 ']' 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2359442 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2359442 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2359442' 00:12:58.745 killing process with pid 2359442 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2359442 00:12:58.745 [2024-07-25 10:28:02.300652] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.745 10:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2359442 00:12:58.745 
[2024-07-25 10:28:02.300721] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.745 [2024-07-25 10:28:02.300779] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.745 [2024-07-25 10:28:02.300792] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x263af00 name raid_bdev1, state offline 00:12:58.745 [2024-07-25 10:28:02.333238] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.004 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:59.004 00:12:59.004 real 0m14.005s 00:12:59.004 user 0m25.675s 00:12:59.004 sys 0m1.917s 00:12:59.004 10:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.004 10:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.004 ************************************ 00:12:59.004 END TEST raid_superblock_test 00:12:59.004 ************************************ 00:12:59.004 10:28:02 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:12:59.004 10:28:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:59.004 10:28:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.004 10:28:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.004 ************************************ 00:12:59.004 START TEST raid_read_error_test 00:12:59.004 ************************************ 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 
00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:12:59.004 10:28:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.QUVJaS5Qvt 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2361450 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2361450 /var/tmp/spdk-raid.sock 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2361450 ']' 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:59.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.004 10:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.004 [2024-07-25 10:28:02.694620] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:12:59.004 [2024-07-25 10:28:02.694703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361450 ] 00:12:59.263 [2024-07-25 10:28:02.797097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.263 [2024-07-25 10:28:02.939303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.521 [2024-07-25 10:28:03.011973] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.521 [2024-07-25 10:28:03.012004] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.521 10:28:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:59.521 10:28:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:59.521 10:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:59.521 10:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:59.779 BaseBdev1_malloc 00:12:59.779 10:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:00.037 true 00:13:00.037 10:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:00.294 [2024-07-25 10:28:03.872219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:00.294 [2024-07-25 10:28:03.872282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:00.294 [2024-07-25 10:28:03.872312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18a5250 00:13:00.294 [2024-07-25 10:28:03.872328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.294 [2024-07-25 10:28:03.874290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.294 [2024-07-25 10:28:03.874319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:00.294 BaseBdev1 00:13:00.294 10:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:00.295 10:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:00.552 BaseBdev2_malloc 00:13:00.552 10:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:00.811 true 00:13:00.811 10:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:01.069 [2024-07-25 10:28:04.694546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:01.069 [2024-07-25 10:28:04.694606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.069 [2024-07-25 10:28:04.694633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1894650 00:13:01.069 [2024-07-25 10:28:04.694648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.069 [2024-07-25 10:28:04.696404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.069 [2024-07-25 10:28:04.696431] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:01.069 BaseBdev2 00:13:01.069 10:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:01.069 10:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:01.328 BaseBdev3_malloc 00:13:01.328 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:01.585 true 00:13:01.585 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:01.844 [2024-07-25 10:28:05.488922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:01.844 [2024-07-25 10:28:05.488993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.844 [2024-07-25 10:28:05.489020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x188a5d0 00:13:01.844 [2024-07-25 10:28:05.489034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.844 [2024-07-25 10:28:05.490879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.844 [2024-07-25 10:28:05.490908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:01.844 BaseBdev3 00:13:01.844 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:02.103 [2024-07-25 10:28:05.737652] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.103 [2024-07-25 10:28:05.738955] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.103 [2024-07-25 10:28:05.739029] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:02.103 [2024-07-25 10:28:05.739280] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x16e96b0 00:13:02.103 [2024-07-25 10:28:05.739298] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:02.103 [2024-07-25 10:28:05.739520] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16e9b00 00:13:02.103 [2024-07-25 10:28:05.739683] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16e96b0 00:13:02.103 [2024-07-25 10:28:05.739697] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x16e96b0 00:13:02.103 [2024-07-25 10:28:05.739835] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.103 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:02.103 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:02.103 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:02.103 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:02.103 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:02.103 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:02.103 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.103 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:02.103 
10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.103 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.103 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.103 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.361 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:02.361 "name": "raid_bdev1", 00:13:02.361 "uuid": "97e0fde4-77d7-4ffb-8c08-8e31b6f654f2", 00:13:02.361 "strip_size_kb": 64, 00:13:02.361 "state": "online", 00:13:02.361 "raid_level": "raid0", 00:13:02.361 "superblock": true, 00:13:02.361 "num_base_bdevs": 3, 00:13:02.361 "num_base_bdevs_discovered": 3, 00:13:02.361 "num_base_bdevs_operational": 3, 00:13:02.361 "base_bdevs_list": [ 00:13:02.361 { 00:13:02.361 "name": "BaseBdev1", 00:13:02.361 "uuid": "30fd49a5-1439-54bc-bca5-da43d6df5ff2", 00:13:02.361 "is_configured": true, 00:13:02.361 "data_offset": 2048, 00:13:02.361 "data_size": 63488 00:13:02.361 }, 00:13:02.361 { 00:13:02.361 "name": "BaseBdev2", 00:13:02.361 "uuid": "9aed145a-693f-5c57-b712-0cfa2d3194be", 00:13:02.361 "is_configured": true, 00:13:02.361 "data_offset": 2048, 00:13:02.361 "data_size": 63488 00:13:02.361 }, 00:13:02.361 { 00:13:02.361 "name": "BaseBdev3", 00:13:02.361 "uuid": "6b38f74c-ff65-5103-862f-1ac3753467e7", 00:13:02.361 "is_configured": true, 00:13:02.361 "data_offset": 2048, 00:13:02.361 "data_size": 63488 00:13:02.361 } 00:13:02.361 ] 00:13:02.361 }' 00:13:02.361 10:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:02.361 10:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.927 10:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- 
# sleep 1 00:13:02.927 10:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:03.184 [2024-07-25 10:28:06.648467] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16f2d30 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.119 10:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.378 10:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.378 "name": "raid_bdev1", 00:13:04.378 "uuid": "97e0fde4-77d7-4ffb-8c08-8e31b6f654f2", 00:13:04.378 "strip_size_kb": 64, 00:13:04.378 "state": "online", 00:13:04.378 "raid_level": "raid0", 00:13:04.378 "superblock": true, 00:13:04.378 "num_base_bdevs": 3, 00:13:04.378 "num_base_bdevs_discovered": 3, 00:13:04.378 "num_base_bdevs_operational": 3, 00:13:04.378 "base_bdevs_list": [ 00:13:04.378 { 00:13:04.378 "name": "BaseBdev1", 00:13:04.378 "uuid": "30fd49a5-1439-54bc-bca5-da43d6df5ff2", 00:13:04.378 "is_configured": true, 00:13:04.378 "data_offset": 2048, 00:13:04.378 "data_size": 63488 00:13:04.378 }, 00:13:04.378 { 00:13:04.378 "name": "BaseBdev2", 00:13:04.378 "uuid": "9aed145a-693f-5c57-b712-0cfa2d3194be", 00:13:04.378 "is_configured": true, 00:13:04.378 "data_offset": 2048, 00:13:04.378 "data_size": 63488 00:13:04.378 }, 00:13:04.378 { 00:13:04.378 "name": "BaseBdev3", 00:13:04.378 "uuid": "6b38f74c-ff65-5103-862f-1ac3753467e7", 00:13:04.378 "is_configured": true, 00:13:04.378 "data_offset": 2048, 00:13:04.378 "data_size": 63488 00:13:04.378 } 00:13:04.378 ] 00:13:04.378 }' 00:13:04.378 10:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:04.378 10:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.944 10:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:05.202 [2024-07-25 10:28:08.850996] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.202 [2024-07-25 10:28:08.851051] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.202 [2024-07-25 10:28:08.854022] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.202 [2024-07-25 10:28:08.854063] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.202 [2024-07-25 10:28:08.854113] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.202 [2024-07-25 10:28:08.854130] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16e96b0 name raid_bdev1, state offline 00:13:05.202 0 00:13:05.202 10:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2361450 00:13:05.202 10:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2361450 ']' 00:13:05.202 10:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2361450 00:13:05.202 10:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:05.203 10:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:05.203 10:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2361450 00:13:05.203 10:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:05.203 10:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:05.203 10:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2361450' 00:13:05.203 killing process with pid 2361450 00:13:05.203 10:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2361450 00:13:05.203 [2024-07-25 10:28:08.894927] bdev_raid.c:1373:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:13:05.203 10:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2361450 00:13:05.460 [2024-07-25 10:28:08.923326] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.718 10:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.QUVJaS5Qvt 00:13:05.719 10:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:05.719 10:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:05.719 10:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:13:05.719 10:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:13:05.719 10:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:05.719 10:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:05.719 10:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:13:05.719 00:13:05.719 real 0m6.583s 00:13:05.719 user 0m10.824s 00:13:05.719 sys 0m0.993s 00:13:05.719 10:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.719 10:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.719 ************************************ 00:13:05.719 END TEST raid_read_error_test 00:13:05.719 ************************************ 00:13:05.719 10:28:09 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:13:05.719 10:28:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:05.719 10:28:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.719 10:28:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.719 ************************************ 00:13:05.719 START TEST raid_write_error_test 00:13:05.719 
************************************ 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:05.719 10:28:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.xDv0GJUDAY 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2362350 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2362350 /var/tmp/spdk-raid.sock 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2362350 ']' 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:13:05.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:05.719 10:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.719 [2024-07-25 10:28:09.331730] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:13:05.719 [2024-07-25 10:28:09.331810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362350 ] 00:13:05.719 [2024-07-25 10:28:09.413991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.978 [2024-07-25 10:28:09.536451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.978 [2024-07-25 10:28:09.610358] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.978 [2024-07-25 10:28:09.610407] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.911 10:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.911 10:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:06.911 10:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:06.911 10:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:06.911 BaseBdev1_malloc 00:13:06.911 10:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:07.169 true 00:13:07.169 10:28:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:07.427 [2024-07-25 10:28:11.089262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:07.427 [2024-07-25 10:28:11.089317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.427 [2024-07-25 10:28:11.089339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16df250 00:13:07.427 [2024-07-25 10:28:11.089355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.427 [2024-07-25 10:28:11.090960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.427 [2024-07-25 10:28:11.090989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:07.427 BaseBdev1 00:13:07.427 10:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:07.427 10:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:07.685 BaseBdev2_malloc 00:13:07.685 10:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:07.943 true 00:13:07.943 10:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:08.201 [2024-07-25 10:28:11.895064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:08.201 [2024-07-25 10:28:11.895135] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.201 [2024-07-25 10:28:11.895162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16ce650 00:13:08.201 [2024-07-25 10:28:11.895178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.201 [2024-07-25 10:28:11.896811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.201 [2024-07-25 10:28:11.896840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:08.201 BaseBdev2 00:13:08.459 10:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:08.459 10:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:08.716 BaseBdev3_malloc 00:13:08.716 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:08.974 true 00:13:08.974 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:08.974 [2024-07-25 10:28:12.676172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:08.974 [2024-07-25 10:28:12.676226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.974 [2024-07-25 10:28:12.676251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16c45d0 00:13:08.974 [2024-07-25 10:28:12.676266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.974 [2024-07-25 10:28:12.677762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:08.974 [2024-07-25 10:28:12.677790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:08.974 BaseBdev3 00:13:09.232 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:09.490 [2024-07-25 10:28:12.969016] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.490 [2024-07-25 10:28:12.970491] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.490 [2024-07-25 10:28:12.970572] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.490 [2024-07-25 10:28:12.970828] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x15236b0 00:13:09.490 [2024-07-25 10:28:12.970846] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:09.490 [2024-07-25 10:28:12.971076] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1523b00 00:13:09.490 [2024-07-25 10:28:12.971289] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15236b0 00:13:09.490 [2024-07-25 10:28:12.971305] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x15236b0 00:13:09.490 [2024-07-25 10:28:12.971457] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.490 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:09.491 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:09.491 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:09.491 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:13:09.491 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:09.491 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:09.491 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:09.491 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:09.491 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:09.491 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:09.491 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.491 10:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.760 10:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:09.760 "name": "raid_bdev1", 00:13:09.760 "uuid": "fb2bfb33-d0d2-4bbe-86da-eadd8a3232c0", 00:13:09.760 "strip_size_kb": 64, 00:13:09.760 "state": "online", 00:13:09.760 "raid_level": "raid0", 00:13:09.760 "superblock": true, 00:13:09.760 "num_base_bdevs": 3, 00:13:09.760 "num_base_bdevs_discovered": 3, 00:13:09.760 "num_base_bdevs_operational": 3, 00:13:09.760 "base_bdevs_list": [ 00:13:09.760 { 00:13:09.760 "name": "BaseBdev1", 00:13:09.760 "uuid": "ab540ffd-d7b5-51d4-affa-1cc36f5ed0d7", 00:13:09.760 "is_configured": true, 00:13:09.760 "data_offset": 2048, 00:13:09.760 "data_size": 63488 00:13:09.760 }, 00:13:09.760 { 00:13:09.760 "name": "BaseBdev2", 00:13:09.760 "uuid": "b283e7f9-4204-5667-b700-a3e8b230e421", 00:13:09.760 "is_configured": true, 00:13:09.760 "data_offset": 2048, 00:13:09.761 "data_size": 63488 00:13:09.761 }, 00:13:09.761 { 00:13:09.761 "name": "BaseBdev3", 
00:13:09.761 "uuid": "98f88c04-20e1-5450-b359-4f1cd2409ffd", 00:13:09.761 "is_configured": true, 00:13:09.761 "data_offset": 2048, 00:13:09.761 "data_size": 63488 00:13:09.761 } 00:13:09.761 ] 00:13:09.761 }' 00:13:09.761 10:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:09.761 10:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.327 10:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:10.327 10:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:10.327 [2024-07-25 10:28:13.923953] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x152cd30 00:13:11.259 10:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:11.517 10:28:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.517 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.775 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:11.775 "name": "raid_bdev1", 00:13:11.775 "uuid": "fb2bfb33-d0d2-4bbe-86da-eadd8a3232c0", 00:13:11.775 "strip_size_kb": 64, 00:13:11.775 "state": "online", 00:13:11.775 "raid_level": "raid0", 00:13:11.775 "superblock": true, 00:13:11.775 "num_base_bdevs": 3, 00:13:11.775 "num_base_bdevs_discovered": 3, 00:13:11.775 "num_base_bdevs_operational": 3, 00:13:11.775 "base_bdevs_list": [ 00:13:11.775 { 00:13:11.775 "name": "BaseBdev1", 00:13:11.775 "uuid": "ab540ffd-d7b5-51d4-affa-1cc36f5ed0d7", 00:13:11.775 "is_configured": true, 00:13:11.775 "data_offset": 2048, 00:13:11.775 "data_size": 63488 00:13:11.775 }, 00:13:11.775 { 00:13:11.775 "name": "BaseBdev2", 00:13:11.775 "uuid": "b283e7f9-4204-5667-b700-a3e8b230e421", 00:13:11.775 "is_configured": true, 00:13:11.775 "data_offset": 2048, 00:13:11.775 "data_size": 63488 00:13:11.775 }, 00:13:11.775 { 00:13:11.775 "name": "BaseBdev3", 00:13:11.775 "uuid": "98f88c04-20e1-5450-b359-4f1cd2409ffd", 00:13:11.775 "is_configured": true, 00:13:11.775 "data_offset": 2048, 00:13:11.775 "data_size": 
63488 00:13:11.775 } 00:13:11.775 ] 00:13:11.775 }' 00:13:11.775 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:11.775 10:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.340 10:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:12.597 [2024-07-25 10:28:16.190899] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:12.597 [2024-07-25 10:28:16.190956] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.597 [2024-07-25 10:28:16.193943] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.597 [2024-07-25 10:28:16.193985] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.597 [2024-07-25 10:28:16.194024] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.597 [2024-07-25 10:28:16.194038] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15236b0 name raid_bdev1, state offline 00:13:12.597 0 00:13:12.597 10:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2362350 00:13:12.597 10:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2362350 ']' 00:13:12.597 10:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2362350 00:13:12.597 10:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:13:12.597 10:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:12.597 10:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2362350 00:13:12.597 10:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:13:12.597 10:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:12.597 10:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2362350' 00:13:12.597 killing process with pid 2362350 00:13:12.597 10:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2362350 00:13:12.597 [2024-07-25 10:28:16.244136] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.597 10:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2362350 00:13:12.597 [2024-07-25 10:28:16.274657] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:13.164 10:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.xDv0GJUDAY 00:13:13.164 10:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:13.164 10:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:13.164 10:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:13:13.164 10:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:13:13.164 10:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:13.164 10:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:13.164 10:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:13:13.164 00:13:13.164 real 0m7.311s 00:13:13.164 user 0m11.814s 00:13:13.164 sys 0m1.009s 00:13:13.164 10:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.164 10:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.164 ************************************ 00:13:13.164 END TEST raid_write_error_test 00:13:13.164 
************************************ 00:13:13.164 10:28:16 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:13.164 10:28:16 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:13:13.165 10:28:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:13.165 10:28:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.165 10:28:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:13.165 ************************************ 00:13:13.165 START TEST raid_state_function_test 00:13:13.165 ************************************ 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:13.165 10:28:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2363249 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:13.165 10:28:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2363249' 00:13:13.165 Process raid pid: 2363249 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2363249 /var/tmp/spdk-raid.sock 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2363249 ']' 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:13.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:13.165 10:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.165 [2024-07-25 10:28:16.688632] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:13:13.165 [2024-07-25 10:28:16.688718] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.165 [2024-07-25 10:28:16.774421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.423 [2024-07-25 10:28:16.901477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.423 [2024-07-25 10:28:16.975168] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.423 [2024-07-25 10:28:16.975209] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.356 10:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:14.356 10:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:14.356 10:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:14.356 [2024-07-25 10:28:17.995330] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.356 [2024-07-25 10:28:17.995380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.356 [2024-07-25 10:28:17.995392] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.356 [2024-07-25 10:28:17.995406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.356 [2024-07-25 10:28:17.995415] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.356 [2024-07-25 10:28:17.995427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.356 
10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:14.356 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:14.356 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:14.356 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:14.356 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:14.356 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:14.356 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:14.356 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:14.356 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:14.356 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:14.356 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.356 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.614 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:14.614 "name": "Existed_Raid", 00:13:14.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.614 "strip_size_kb": 64, 00:13:14.614 "state": "configuring", 00:13:14.614 "raid_level": "concat", 00:13:14.614 "superblock": false, 00:13:14.614 "num_base_bdevs": 3, 00:13:14.614 "num_base_bdevs_discovered": 0, 00:13:14.614 "num_base_bdevs_operational": 3, 00:13:14.614 "base_bdevs_list": [ 00:13:14.614 { 
00:13:14.614 "name": "BaseBdev1", 00:13:14.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.614 "is_configured": false, 00:13:14.614 "data_offset": 0, 00:13:14.614 "data_size": 0 00:13:14.614 }, 00:13:14.614 { 00:13:14.615 "name": "BaseBdev2", 00:13:14.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.615 "is_configured": false, 00:13:14.615 "data_offset": 0, 00:13:14.615 "data_size": 0 00:13:14.615 }, 00:13:14.615 { 00:13:14.615 "name": "BaseBdev3", 00:13:14.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.615 "is_configured": false, 00:13:14.615 "data_offset": 0, 00:13:14.615 "data_size": 0 00:13:14.615 } 00:13:14.615 ] 00:13:14.615 }' 00:13:14.615 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:14.615 10:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.231 10:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:15.490 [2024-07-25 10:28:19.033941] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:15.490 [2024-07-25 10:28:19.033982] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23f3620 name Existed_Raid, state configuring 00:13:15.490 10:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:15.748 [2024-07-25 10:28:19.330749] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:15.748 [2024-07-25 10:28:19.330796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:15.748 [2024-07-25 10:28:19.330808] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev2 00:13:15.748 [2024-07-25 10:28:19.330821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:15.748 [2024-07-25 10:28:19.330831] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:15.748 [2024-07-25 10:28:19.330843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:15.748 10:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:16.006 [2024-07-25 10:28:19.632017] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.006 BaseBdev1 00:13:16.006 10:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:16.006 10:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:16.006 10:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:16.006 10:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:16.006 10:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:16.006 10:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:16.006 10:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:16.265 10:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:16.523 [ 00:13:16.523 { 00:13:16.523 "name": "BaseBdev1", 00:13:16.523 "aliases": [ 00:13:16.523 
"96bae431-a3d9-4a4d-8a5e-ba1d4643959a" 00:13:16.523 ], 00:13:16.523 "product_name": "Malloc disk", 00:13:16.523 "block_size": 512, 00:13:16.523 "num_blocks": 65536, 00:13:16.523 "uuid": "96bae431-a3d9-4a4d-8a5e-ba1d4643959a", 00:13:16.523 "assigned_rate_limits": { 00:13:16.523 "rw_ios_per_sec": 0, 00:13:16.523 "rw_mbytes_per_sec": 0, 00:13:16.523 "r_mbytes_per_sec": 0, 00:13:16.523 "w_mbytes_per_sec": 0 00:13:16.523 }, 00:13:16.523 "claimed": true, 00:13:16.523 "claim_type": "exclusive_write", 00:13:16.523 "zoned": false, 00:13:16.523 "supported_io_types": { 00:13:16.523 "read": true, 00:13:16.523 "write": true, 00:13:16.524 "unmap": true, 00:13:16.524 "flush": true, 00:13:16.524 "reset": true, 00:13:16.524 "nvme_admin": false, 00:13:16.524 "nvme_io": false, 00:13:16.524 "nvme_io_md": false, 00:13:16.524 "write_zeroes": true, 00:13:16.524 "zcopy": true, 00:13:16.524 "get_zone_info": false, 00:13:16.524 "zone_management": false, 00:13:16.524 "zone_append": false, 00:13:16.524 "compare": false, 00:13:16.524 "compare_and_write": false, 00:13:16.524 "abort": true, 00:13:16.524 "seek_hole": false, 00:13:16.524 "seek_data": false, 00:13:16.524 "copy": true, 00:13:16.524 "nvme_iov_md": false 00:13:16.524 }, 00:13:16.524 "memory_domains": [ 00:13:16.524 { 00:13:16.524 "dma_device_id": "system", 00:13:16.524 "dma_device_type": 1 00:13:16.524 }, 00:13:16.524 { 00:13:16.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.524 "dma_device_type": 2 00:13:16.524 } 00:13:16.524 ], 00:13:16.524 "driver_specific": {} 00:13:16.524 } 00:13:16.524 ] 00:13:16.524 10:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:16.524 10:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:16.524 10:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:16.524 10:28:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:16.524 10:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:16.524 10:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:16.524 10:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:16.524 10:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:16.524 10:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:16.524 10:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:16.524 10:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:16.524 10:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.524 10:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.782 10:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:16.782 "name": "Existed_Raid", 00:13:16.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.782 "strip_size_kb": 64, 00:13:16.782 "state": "configuring", 00:13:16.782 "raid_level": "concat", 00:13:16.782 "superblock": false, 00:13:16.782 "num_base_bdevs": 3, 00:13:16.782 "num_base_bdevs_discovered": 1, 00:13:16.782 "num_base_bdevs_operational": 3, 00:13:16.782 "base_bdevs_list": [ 00:13:16.782 { 00:13:16.782 "name": "BaseBdev1", 00:13:16.782 "uuid": "96bae431-a3d9-4a4d-8a5e-ba1d4643959a", 00:13:16.782 "is_configured": true, 00:13:16.782 "data_offset": 0, 00:13:16.782 "data_size": 65536 00:13:16.782 }, 00:13:16.782 { 00:13:16.782 "name": "BaseBdev2", 00:13:16.782 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:16.782 "is_configured": false, 00:13:16.782 "data_offset": 0, 00:13:16.782 "data_size": 0 00:13:16.782 }, 00:13:16.782 { 00:13:16.782 "name": "BaseBdev3", 00:13:16.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.782 "is_configured": false, 00:13:16.782 "data_offset": 0, 00:13:16.782 "data_size": 0 00:13:16.782 } 00:13:16.782 ] 00:13:16.782 }' 00:13:16.782 10:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:16.782 10:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.349 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:17.607 [2024-07-25 10:28:21.224201] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:17.607 [2024-07-25 10:28:21.224253] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23f2e50 name Existed_Raid, state configuring 00:13:17.607 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:17.866 [2024-07-25 10:28:21.480913] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.866 [2024-07-25 10:28:21.482441] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:17.866 [2024-07-25 10:28:21.482487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:17.866 [2024-07-25 10:28:21.482499] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:17.866 [2024-07-25 10:28:21.482512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.866 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.124 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:18.124 "name": "Existed_Raid", 00:13:18.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.124 "strip_size_kb": 64, 00:13:18.124 "state": "configuring", 00:13:18.124 
"raid_level": "concat", 00:13:18.124 "superblock": false, 00:13:18.124 "num_base_bdevs": 3, 00:13:18.124 "num_base_bdevs_discovered": 1, 00:13:18.124 "num_base_bdevs_operational": 3, 00:13:18.124 "base_bdevs_list": [ 00:13:18.124 { 00:13:18.124 "name": "BaseBdev1", 00:13:18.124 "uuid": "96bae431-a3d9-4a4d-8a5e-ba1d4643959a", 00:13:18.124 "is_configured": true, 00:13:18.124 "data_offset": 0, 00:13:18.124 "data_size": 65536 00:13:18.124 }, 00:13:18.124 { 00:13:18.124 "name": "BaseBdev2", 00:13:18.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.124 "is_configured": false, 00:13:18.124 "data_offset": 0, 00:13:18.124 "data_size": 0 00:13:18.124 }, 00:13:18.124 { 00:13:18.124 "name": "BaseBdev3", 00:13:18.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.124 "is_configured": false, 00:13:18.124 "data_offset": 0, 00:13:18.124 "data_size": 0 00:13:18.124 } 00:13:18.124 ] 00:13:18.124 }' 00:13:18.124 10:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.124 10:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.689 10:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:18.947 [2024-07-25 10:28:22.529545] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:18.947 BaseBdev2 00:13:18.947 10:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:18.947 10:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:18.947 10:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:18.947 10:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:18.947 10:28:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:18.947 10:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:18.947 10:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:19.204 10:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:19.463 [ 00:13:19.463 { 00:13:19.463 "name": "BaseBdev2", 00:13:19.463 "aliases": [ 00:13:19.463 "15ed6815-615c-4bae-b850-59477abeccec" 00:13:19.463 ], 00:13:19.463 "product_name": "Malloc disk", 00:13:19.463 "block_size": 512, 00:13:19.463 "num_blocks": 65536, 00:13:19.463 "uuid": "15ed6815-615c-4bae-b850-59477abeccec", 00:13:19.463 "assigned_rate_limits": { 00:13:19.463 "rw_ios_per_sec": 0, 00:13:19.463 "rw_mbytes_per_sec": 0, 00:13:19.463 "r_mbytes_per_sec": 0, 00:13:19.463 "w_mbytes_per_sec": 0 00:13:19.463 }, 00:13:19.463 "claimed": true, 00:13:19.463 "claim_type": "exclusive_write", 00:13:19.463 "zoned": false, 00:13:19.463 "supported_io_types": { 00:13:19.463 "read": true, 00:13:19.463 "write": true, 00:13:19.463 "unmap": true, 00:13:19.463 "flush": true, 00:13:19.463 "reset": true, 00:13:19.463 "nvme_admin": false, 00:13:19.463 "nvme_io": false, 00:13:19.463 "nvme_io_md": false, 00:13:19.463 "write_zeroes": true, 00:13:19.463 "zcopy": true, 00:13:19.463 "get_zone_info": false, 00:13:19.463 "zone_management": false, 00:13:19.463 "zone_append": false, 00:13:19.463 "compare": false, 00:13:19.463 "compare_and_write": false, 00:13:19.463 "abort": true, 00:13:19.463 "seek_hole": false, 00:13:19.463 "seek_data": false, 00:13:19.463 "copy": true, 00:13:19.463 "nvme_iov_md": false 00:13:19.463 }, 00:13:19.463 "memory_domains": [ 00:13:19.463 { 00:13:19.463 "dma_device_id": "system", 
00:13:19.463 "dma_device_type": 1 00:13:19.463 }, 00:13:19.463 { 00:13:19.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.463 "dma_device_type": 2 00:13:19.463 } 00:13:19.463 ], 00:13:19.463 "driver_specific": {} 00:13:19.463 } 00:13:19.463 ] 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.463 10:28:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.721 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:19.721 "name": "Existed_Raid", 00:13:19.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.721 "strip_size_kb": 64, 00:13:19.721 "state": "configuring", 00:13:19.721 "raid_level": "concat", 00:13:19.721 "superblock": false, 00:13:19.721 "num_base_bdevs": 3, 00:13:19.721 "num_base_bdevs_discovered": 2, 00:13:19.721 "num_base_bdevs_operational": 3, 00:13:19.721 "base_bdevs_list": [ 00:13:19.721 { 00:13:19.721 "name": "BaseBdev1", 00:13:19.721 "uuid": "96bae431-a3d9-4a4d-8a5e-ba1d4643959a", 00:13:19.721 "is_configured": true, 00:13:19.721 "data_offset": 0, 00:13:19.721 "data_size": 65536 00:13:19.721 }, 00:13:19.721 { 00:13:19.721 "name": "BaseBdev2", 00:13:19.721 "uuid": "15ed6815-615c-4bae-b850-59477abeccec", 00:13:19.721 "is_configured": true, 00:13:19.721 "data_offset": 0, 00:13:19.721 "data_size": 65536 00:13:19.721 }, 00:13:19.721 { 00:13:19.721 "name": "BaseBdev3", 00:13:19.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.721 "is_configured": false, 00:13:19.721 "data_offset": 0, 00:13:19.721 "data_size": 0 00:13:19.721 } 00:13:19.721 ] 00:13:19.721 }' 00:13:19.721 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:19.721 10:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.285 10:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:20.542 [2024-07-25 10:28:24.063410] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.542 [2024-07-25 10:28:24.063476] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x23f3d90 00:13:20.542 [2024-07-25 10:28:24.063487] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:20.542 [2024-07-25 10:28:24.063738] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23f7a90 00:13:20.542 [2024-07-25 10:28:24.063893] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23f3d90 00:13:20.542 [2024-07-25 10:28:24.063910] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x23f3d90 00:13:20.542 [2024-07-25 10:28:24.064131] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.542 BaseBdev3 00:13:20.542 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:20.542 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:20.542 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:20.542 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:20.542 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:20.542 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:20.542 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:20.800 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:21.059 [ 00:13:21.059 { 00:13:21.059 "name": "BaseBdev3", 00:13:21.059 "aliases": [ 00:13:21.059 "8012b46a-b84f-4da2-9bee-bf0715b9f8ff" 00:13:21.059 ], 00:13:21.059 "product_name": "Malloc disk", 00:13:21.059 "block_size": 512, 00:13:21.059 "num_blocks": 65536, 00:13:21.059 
"uuid": "8012b46a-b84f-4da2-9bee-bf0715b9f8ff", 00:13:21.059 "assigned_rate_limits": { 00:13:21.059 "rw_ios_per_sec": 0, 00:13:21.059 "rw_mbytes_per_sec": 0, 00:13:21.059 "r_mbytes_per_sec": 0, 00:13:21.059 "w_mbytes_per_sec": 0 00:13:21.059 }, 00:13:21.059 "claimed": true, 00:13:21.059 "claim_type": "exclusive_write", 00:13:21.059 "zoned": false, 00:13:21.059 "supported_io_types": { 00:13:21.059 "read": true, 00:13:21.059 "write": true, 00:13:21.059 "unmap": true, 00:13:21.059 "flush": true, 00:13:21.059 "reset": true, 00:13:21.059 "nvme_admin": false, 00:13:21.059 "nvme_io": false, 00:13:21.059 "nvme_io_md": false, 00:13:21.059 "write_zeroes": true, 00:13:21.059 "zcopy": true, 00:13:21.059 "get_zone_info": false, 00:13:21.059 "zone_management": false, 00:13:21.059 "zone_append": false, 00:13:21.059 "compare": false, 00:13:21.059 "compare_and_write": false, 00:13:21.059 "abort": true, 00:13:21.059 "seek_hole": false, 00:13:21.059 "seek_data": false, 00:13:21.059 "copy": true, 00:13:21.059 "nvme_iov_md": false 00:13:21.059 }, 00:13:21.059 "memory_domains": [ 00:13:21.059 { 00:13:21.059 "dma_device_id": "system", 00:13:21.059 "dma_device_type": 1 00:13:21.059 }, 00:13:21.059 { 00:13:21.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.059 "dma_device_type": 2 00:13:21.059 } 00:13:21.059 ], 00:13:21.059 "driver_specific": {} 00:13:21.059 } 00:13:21.059 ] 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:21.059 10:28:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.059 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.317 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:21.317 "name": "Existed_Raid", 00:13:21.318 "uuid": "a844d90a-bdeb-49bc-84c7-377246d6e4f8", 00:13:21.318 "strip_size_kb": 64, 00:13:21.318 "state": "online", 00:13:21.318 "raid_level": "concat", 00:13:21.318 "superblock": false, 00:13:21.318 "num_base_bdevs": 3, 00:13:21.318 "num_base_bdevs_discovered": 3, 00:13:21.318 "num_base_bdevs_operational": 3, 00:13:21.318 "base_bdevs_list": [ 00:13:21.318 { 00:13:21.318 "name": "BaseBdev1", 00:13:21.318 "uuid": "96bae431-a3d9-4a4d-8a5e-ba1d4643959a", 00:13:21.318 "is_configured": true, 00:13:21.318 "data_offset": 0, 00:13:21.318 "data_size": 65536 00:13:21.318 }, 00:13:21.318 { 00:13:21.318 "name": "BaseBdev2", 00:13:21.318 "uuid": 
"15ed6815-615c-4bae-b850-59477abeccec", 00:13:21.318 "is_configured": true, 00:13:21.318 "data_offset": 0, 00:13:21.318 "data_size": 65536 00:13:21.318 }, 00:13:21.318 { 00:13:21.318 "name": "BaseBdev3", 00:13:21.318 "uuid": "8012b46a-b84f-4da2-9bee-bf0715b9f8ff", 00:13:21.318 "is_configured": true, 00:13:21.318 "data_offset": 0, 00:13:21.318 "data_size": 65536 00:13:21.318 } 00:13:21.318 ] 00:13:21.318 }' 00:13:21.318 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:21.318 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.883 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:21.883 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:21.883 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:21.883 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:21.883 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:21.883 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:21.883 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:21.883 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:21.883 [2024-07-25 10:28:25.591713] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.141 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:22.141 "name": "Existed_Raid", 00:13:22.141 "aliases": [ 00:13:22.141 "a844d90a-bdeb-49bc-84c7-377246d6e4f8" 00:13:22.141 ], 00:13:22.141 "product_name": "Raid Volume", 
00:13:22.141 "block_size": 512, 00:13:22.141 "num_blocks": 196608, 00:13:22.141 "uuid": "a844d90a-bdeb-49bc-84c7-377246d6e4f8", 00:13:22.141 "assigned_rate_limits": { 00:13:22.141 "rw_ios_per_sec": 0, 00:13:22.141 "rw_mbytes_per_sec": 0, 00:13:22.141 "r_mbytes_per_sec": 0, 00:13:22.141 "w_mbytes_per_sec": 0 00:13:22.141 }, 00:13:22.141 "claimed": false, 00:13:22.141 "zoned": false, 00:13:22.141 "supported_io_types": { 00:13:22.141 "read": true, 00:13:22.141 "write": true, 00:13:22.141 "unmap": true, 00:13:22.141 "flush": true, 00:13:22.141 "reset": true, 00:13:22.141 "nvme_admin": false, 00:13:22.141 "nvme_io": false, 00:13:22.141 "nvme_io_md": false, 00:13:22.141 "write_zeroes": true, 00:13:22.141 "zcopy": false, 00:13:22.141 "get_zone_info": false, 00:13:22.141 "zone_management": false, 00:13:22.141 "zone_append": false, 00:13:22.141 "compare": false, 00:13:22.141 "compare_and_write": false, 00:13:22.141 "abort": false, 00:13:22.141 "seek_hole": false, 00:13:22.141 "seek_data": false, 00:13:22.141 "copy": false, 00:13:22.141 "nvme_iov_md": false 00:13:22.141 }, 00:13:22.141 "memory_domains": [ 00:13:22.141 { 00:13:22.141 "dma_device_id": "system", 00:13:22.141 "dma_device_type": 1 00:13:22.141 }, 00:13:22.141 { 00:13:22.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.141 "dma_device_type": 2 00:13:22.141 }, 00:13:22.141 { 00:13:22.141 "dma_device_id": "system", 00:13:22.141 "dma_device_type": 1 00:13:22.141 }, 00:13:22.141 { 00:13:22.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.141 "dma_device_type": 2 00:13:22.141 }, 00:13:22.141 { 00:13:22.141 "dma_device_id": "system", 00:13:22.141 "dma_device_type": 1 00:13:22.141 }, 00:13:22.141 { 00:13:22.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.141 "dma_device_type": 2 00:13:22.141 } 00:13:22.141 ], 00:13:22.141 "driver_specific": { 00:13:22.141 "raid": { 00:13:22.141 "uuid": "a844d90a-bdeb-49bc-84c7-377246d6e4f8", 00:13:22.141 "strip_size_kb": 64, 00:13:22.141 "state": "online", 00:13:22.141 
"raid_level": "concat", 00:13:22.141 "superblock": false, 00:13:22.141 "num_base_bdevs": 3, 00:13:22.141 "num_base_bdevs_discovered": 3, 00:13:22.142 "num_base_bdevs_operational": 3, 00:13:22.142 "base_bdevs_list": [ 00:13:22.142 { 00:13:22.142 "name": "BaseBdev1", 00:13:22.142 "uuid": "96bae431-a3d9-4a4d-8a5e-ba1d4643959a", 00:13:22.142 "is_configured": true, 00:13:22.142 "data_offset": 0, 00:13:22.142 "data_size": 65536 00:13:22.142 }, 00:13:22.142 { 00:13:22.142 "name": "BaseBdev2", 00:13:22.142 "uuid": "15ed6815-615c-4bae-b850-59477abeccec", 00:13:22.142 "is_configured": true, 00:13:22.142 "data_offset": 0, 00:13:22.142 "data_size": 65536 00:13:22.142 }, 00:13:22.142 { 00:13:22.142 "name": "BaseBdev3", 00:13:22.142 "uuid": "8012b46a-b84f-4da2-9bee-bf0715b9f8ff", 00:13:22.142 "is_configured": true, 00:13:22.142 "data_offset": 0, 00:13:22.142 "data_size": 65536 00:13:22.142 } 00:13:22.142 ] 00:13:22.142 } 00:13:22.142 } 00:13:22.142 }' 00:13:22.142 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:22.142 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:22.142 BaseBdev2 00:13:22.142 BaseBdev3' 00:13:22.142 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:22.142 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:22.142 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:22.400 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:22.400 "name": "BaseBdev1", 00:13:22.400 "aliases": [ 00:13:22.400 "96bae431-a3d9-4a4d-8a5e-ba1d4643959a" 00:13:22.400 ], 00:13:22.400 "product_name": "Malloc disk", 00:13:22.400 
"block_size": 512, 00:13:22.400 "num_blocks": 65536, 00:13:22.400 "uuid": "96bae431-a3d9-4a4d-8a5e-ba1d4643959a", 00:13:22.400 "assigned_rate_limits": { 00:13:22.400 "rw_ios_per_sec": 0, 00:13:22.400 "rw_mbytes_per_sec": 0, 00:13:22.400 "r_mbytes_per_sec": 0, 00:13:22.400 "w_mbytes_per_sec": 0 00:13:22.400 }, 00:13:22.400 "claimed": true, 00:13:22.400 "claim_type": "exclusive_write", 00:13:22.400 "zoned": false, 00:13:22.400 "supported_io_types": { 00:13:22.400 "read": true, 00:13:22.400 "write": true, 00:13:22.400 "unmap": true, 00:13:22.400 "flush": true, 00:13:22.400 "reset": true, 00:13:22.400 "nvme_admin": false, 00:13:22.400 "nvme_io": false, 00:13:22.400 "nvme_io_md": false, 00:13:22.400 "write_zeroes": true, 00:13:22.400 "zcopy": true, 00:13:22.400 "get_zone_info": false, 00:13:22.400 "zone_management": false, 00:13:22.400 "zone_append": false, 00:13:22.400 "compare": false, 00:13:22.400 "compare_and_write": false, 00:13:22.400 "abort": true, 00:13:22.400 "seek_hole": false, 00:13:22.400 "seek_data": false, 00:13:22.400 "copy": true, 00:13:22.400 "nvme_iov_md": false 00:13:22.400 }, 00:13:22.400 "memory_domains": [ 00:13:22.400 { 00:13:22.400 "dma_device_id": "system", 00:13:22.400 "dma_device_type": 1 00:13:22.400 }, 00:13:22.400 { 00:13:22.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.400 "dma_device_type": 2 00:13:22.400 } 00:13:22.400 ], 00:13:22.400 "driver_specific": {} 00:13:22.400 }' 00:13:22.400 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:22.400 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:22.400 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:22.400 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.400 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.400 10:28:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:22.400 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:22.656 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:22.656 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:22.656 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.656 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.656 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:22.656 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:22.656 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:22.656 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:22.913 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:22.913 "name": "BaseBdev2", 00:13:22.913 "aliases": [ 00:13:22.913 "15ed6815-615c-4bae-b850-59477abeccec" 00:13:22.913 ], 00:13:22.913 "product_name": "Malloc disk", 00:13:22.913 "block_size": 512, 00:13:22.913 "num_blocks": 65536, 00:13:22.913 "uuid": "15ed6815-615c-4bae-b850-59477abeccec", 00:13:22.913 "assigned_rate_limits": { 00:13:22.913 "rw_ios_per_sec": 0, 00:13:22.913 "rw_mbytes_per_sec": 0, 00:13:22.913 "r_mbytes_per_sec": 0, 00:13:22.913 "w_mbytes_per_sec": 0 00:13:22.913 }, 00:13:22.913 "claimed": true, 00:13:22.913 "claim_type": "exclusive_write", 00:13:22.913 "zoned": false, 00:13:22.913 "supported_io_types": { 00:13:22.913 "read": true, 00:13:22.913 "write": true, 00:13:22.913 "unmap": true, 00:13:22.913 "flush": true, 00:13:22.913 "reset": true, 00:13:22.913 "nvme_admin": 
false, 00:13:22.913 "nvme_io": false, 00:13:22.913 "nvme_io_md": false, 00:13:22.913 "write_zeroes": true, 00:13:22.913 "zcopy": true, 00:13:22.913 "get_zone_info": false, 00:13:22.913 "zone_management": false, 00:13:22.913 "zone_append": false, 00:13:22.913 "compare": false, 00:13:22.913 "compare_and_write": false, 00:13:22.913 "abort": true, 00:13:22.913 "seek_hole": false, 00:13:22.913 "seek_data": false, 00:13:22.913 "copy": true, 00:13:22.913 "nvme_iov_md": false 00:13:22.913 }, 00:13:22.913 "memory_domains": [ 00:13:22.913 { 00:13:22.913 "dma_device_id": "system", 00:13:22.913 "dma_device_type": 1 00:13:22.913 }, 00:13:22.913 { 00:13:22.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.913 "dma_device_type": 2 00:13:22.913 } 00:13:22.913 ], 00:13:22.913 "driver_specific": {} 00:13:22.913 }' 00:13:22.913 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:22.913 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:22.913 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:22.913 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.913 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.913 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:22.913 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:23.170 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:23.170 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:23.170 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:23.170 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:23.170 10:28:26 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:23.170 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:23.170 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:23.170 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:23.428 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:23.428 "name": "BaseBdev3", 00:13:23.428 "aliases": [ 00:13:23.428 "8012b46a-b84f-4da2-9bee-bf0715b9f8ff" 00:13:23.428 ], 00:13:23.428 "product_name": "Malloc disk", 00:13:23.428 "block_size": 512, 00:13:23.428 "num_blocks": 65536, 00:13:23.428 "uuid": "8012b46a-b84f-4da2-9bee-bf0715b9f8ff", 00:13:23.428 "assigned_rate_limits": { 00:13:23.428 "rw_ios_per_sec": 0, 00:13:23.428 "rw_mbytes_per_sec": 0, 00:13:23.428 "r_mbytes_per_sec": 0, 00:13:23.428 "w_mbytes_per_sec": 0 00:13:23.428 }, 00:13:23.428 "claimed": true, 00:13:23.428 "claim_type": "exclusive_write", 00:13:23.428 "zoned": false, 00:13:23.428 "supported_io_types": { 00:13:23.428 "read": true, 00:13:23.428 "write": true, 00:13:23.428 "unmap": true, 00:13:23.428 "flush": true, 00:13:23.429 "reset": true, 00:13:23.429 "nvme_admin": false, 00:13:23.429 "nvme_io": false, 00:13:23.429 "nvme_io_md": false, 00:13:23.429 "write_zeroes": true, 00:13:23.429 "zcopy": true, 00:13:23.429 "get_zone_info": false, 00:13:23.429 "zone_management": false, 00:13:23.429 "zone_append": false, 00:13:23.429 "compare": false, 00:13:23.429 "compare_and_write": false, 00:13:23.429 "abort": true, 00:13:23.429 "seek_hole": false, 00:13:23.429 "seek_data": false, 00:13:23.429 "copy": true, 00:13:23.429 "nvme_iov_md": false 00:13:23.429 }, 00:13:23.429 "memory_domains": [ 00:13:23.429 { 00:13:23.429 "dma_device_id": "system", 00:13:23.429 "dma_device_type": 1 00:13:23.429 
}, 00:13:23.429 { 00:13:23.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.429 "dma_device_type": 2 00:13:23.429 } 00:13:23.429 ], 00:13:23.429 "driver_specific": {} 00:13:23.429 }' 00:13:23.429 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:23.429 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:23.429 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:23.429 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:23.429 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:23.686 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:23.686 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:23.686 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:23.686 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:23.686 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:23.686 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:23.686 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:23.686 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:23.944 [2024-07-25 10:28:27.564770] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.944 [2024-07-25 10:28:27.564798] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.944 [2024-07-25 10:28:27.564842] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.944 
10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.944 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:24.202 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:24.202 "name": "Existed_Raid", 00:13:24.202 "uuid": "a844d90a-bdeb-49bc-84c7-377246d6e4f8", 00:13:24.202 "strip_size_kb": 64, 00:13:24.202 "state": "offline", 00:13:24.202 "raid_level": "concat", 00:13:24.202 "superblock": false, 00:13:24.202 "num_base_bdevs": 3, 00:13:24.202 "num_base_bdevs_discovered": 2, 00:13:24.202 "num_base_bdevs_operational": 2, 00:13:24.202 "base_bdevs_list": [ 00:13:24.202 { 00:13:24.202 "name": null, 00:13:24.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.202 "is_configured": false, 00:13:24.202 "data_offset": 0, 00:13:24.202 "data_size": 65536 00:13:24.202 }, 00:13:24.202 { 00:13:24.202 "name": "BaseBdev2", 00:13:24.202 "uuid": "15ed6815-615c-4bae-b850-59477abeccec", 00:13:24.202 "is_configured": true, 00:13:24.202 "data_offset": 0, 00:13:24.202 "data_size": 65536 00:13:24.202 }, 00:13:24.202 { 00:13:24.202 "name": "BaseBdev3", 00:13:24.202 "uuid": "8012b46a-b84f-4da2-9bee-bf0715b9f8ff", 00:13:24.202 "is_configured": true, 00:13:24.202 "data_offset": 0, 00:13:24.202 "data_size": 65536 00:13:24.202 } 00:13:24.202 ] 00:13:24.202 }' 00:13:24.202 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:24.202 10:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.767 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:24.767 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:24.767 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.767 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:25.024 10:28:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:25.024 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.024 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:25.282 [2024-07-25 10:28:28.990364] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.539 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:25.539 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:25.539 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.539 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:25.797 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:25.797 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.797 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:26.055 [2024-07-25 10:28:29.525017] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:26.056 [2024-07-25 10:28:29.525078] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23f3d90 name Existed_Raid, state offline 00:13:26.056 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:26.056 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:26.056 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.056 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:26.314 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:26.314 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:26.314 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:26.314 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:26.314 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:26.314 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:26.571 BaseBdev2 00:13:26.571 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:26.571 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:26.571 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:26.571 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:26.571 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:26.571 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:26.571 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:26.829 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:27.088 [ 00:13:27.088 { 00:13:27.088 "name": "BaseBdev2", 00:13:27.088 "aliases": [ 00:13:27.088 "6ce9d2ee-d39d-49a9-9c48-80e5ae5fea6a" 00:13:27.088 ], 00:13:27.088 "product_name": "Malloc disk", 00:13:27.088 "block_size": 512, 00:13:27.088 "num_blocks": 65536, 00:13:27.088 "uuid": "6ce9d2ee-d39d-49a9-9c48-80e5ae5fea6a", 00:13:27.088 "assigned_rate_limits": { 00:13:27.088 "rw_ios_per_sec": 0, 00:13:27.088 "rw_mbytes_per_sec": 0, 00:13:27.088 "r_mbytes_per_sec": 0, 00:13:27.088 "w_mbytes_per_sec": 0 00:13:27.088 }, 00:13:27.088 "claimed": false, 00:13:27.088 "zoned": false, 00:13:27.088 "supported_io_types": { 00:13:27.088 "read": true, 00:13:27.088 "write": true, 00:13:27.088 "unmap": true, 00:13:27.088 "flush": true, 00:13:27.088 "reset": true, 00:13:27.088 "nvme_admin": false, 00:13:27.088 "nvme_io": false, 00:13:27.088 "nvme_io_md": false, 00:13:27.088 "write_zeroes": true, 00:13:27.088 "zcopy": true, 00:13:27.088 "get_zone_info": false, 00:13:27.088 "zone_management": false, 00:13:27.088 "zone_append": false, 00:13:27.088 "compare": false, 00:13:27.088 "compare_and_write": false, 00:13:27.088 "abort": true, 00:13:27.088 "seek_hole": false, 00:13:27.088 "seek_data": false, 00:13:27.088 "copy": true, 00:13:27.088 "nvme_iov_md": false 00:13:27.088 }, 00:13:27.088 "memory_domains": [ 00:13:27.088 { 00:13:27.088 "dma_device_id": "system", 00:13:27.088 "dma_device_type": 1 00:13:27.088 }, 00:13:27.088 { 00:13:27.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.088 "dma_device_type": 2 00:13:27.088 } 00:13:27.088 ], 00:13:27.088 "driver_specific": {} 00:13:27.088 } 00:13:27.088 ] 00:13:27.088 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:27.088 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:27.088 10:28:30 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:27.088 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:27.346 BaseBdev3 00:13:27.346 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:27.346 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:27.346 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:27.346 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:27.346 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:27.346 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:27.346 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:27.604 10:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:27.861 [ 00:13:27.862 { 00:13:27.862 "name": "BaseBdev3", 00:13:27.862 "aliases": [ 00:13:27.862 "1a8309aa-faac-4abb-a4cb-8b170426ca21" 00:13:27.862 ], 00:13:27.862 "product_name": "Malloc disk", 00:13:27.862 "block_size": 512, 00:13:27.862 "num_blocks": 65536, 00:13:27.862 "uuid": "1a8309aa-faac-4abb-a4cb-8b170426ca21", 00:13:27.862 "assigned_rate_limits": { 00:13:27.862 "rw_ios_per_sec": 0, 00:13:27.862 "rw_mbytes_per_sec": 0, 00:13:27.862 "r_mbytes_per_sec": 0, 00:13:27.862 "w_mbytes_per_sec": 0 00:13:27.862 }, 00:13:27.862 "claimed": false, 00:13:27.862 "zoned": false, 00:13:27.862 
"supported_io_types": { 00:13:27.862 "read": true, 00:13:27.862 "write": true, 00:13:27.862 "unmap": true, 00:13:27.862 "flush": true, 00:13:27.862 "reset": true, 00:13:27.862 "nvme_admin": false, 00:13:27.862 "nvme_io": false, 00:13:27.862 "nvme_io_md": false, 00:13:27.862 "write_zeroes": true, 00:13:27.862 "zcopy": true, 00:13:27.862 "get_zone_info": false, 00:13:27.862 "zone_management": false, 00:13:27.862 "zone_append": false, 00:13:27.862 "compare": false, 00:13:27.862 "compare_and_write": false, 00:13:27.862 "abort": true, 00:13:27.862 "seek_hole": false, 00:13:27.862 "seek_data": false, 00:13:27.862 "copy": true, 00:13:27.862 "nvme_iov_md": false 00:13:27.862 }, 00:13:27.862 "memory_domains": [ 00:13:27.862 { 00:13:27.862 "dma_device_id": "system", 00:13:27.862 "dma_device_type": 1 00:13:27.862 }, 00:13:27.862 { 00:13:27.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.862 "dma_device_type": 2 00:13:27.862 } 00:13:27.862 ], 00:13:27.862 "driver_specific": {} 00:13:27.862 } 00:13:27.862 ] 00:13:27.862 10:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:27.862 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:27.862 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:27.862 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:28.120 [2024-07-25 10:28:31.608258] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:28.120 [2024-07-25 10:28:31.608301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:28.120 [2024-07-25 10:28:31.608346] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.120 
[2024-07-25 10:28:31.609709] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.120 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:28.120 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:28.120 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:28.120 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:28.120 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:28.120 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:28.120 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:28.120 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:28.120 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:28.120 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:28.120 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.120 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.377 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:28.377 "name": "Existed_Raid", 00:13:28.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.377 "strip_size_kb": 64, 00:13:28.377 "state": "configuring", 00:13:28.377 "raid_level": "concat", 00:13:28.377 "superblock": false, 00:13:28.377 "num_base_bdevs": 3, 00:13:28.377 
"num_base_bdevs_discovered": 2, 00:13:28.377 "num_base_bdevs_operational": 3, 00:13:28.377 "base_bdevs_list": [ 00:13:28.377 { 00:13:28.377 "name": "BaseBdev1", 00:13:28.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.377 "is_configured": false, 00:13:28.377 "data_offset": 0, 00:13:28.377 "data_size": 0 00:13:28.377 }, 00:13:28.377 { 00:13:28.377 "name": "BaseBdev2", 00:13:28.377 "uuid": "6ce9d2ee-d39d-49a9-9c48-80e5ae5fea6a", 00:13:28.377 "is_configured": true, 00:13:28.377 "data_offset": 0, 00:13:28.377 "data_size": 65536 00:13:28.377 }, 00:13:28.378 { 00:13:28.378 "name": "BaseBdev3", 00:13:28.378 "uuid": "1a8309aa-faac-4abb-a4cb-8b170426ca21", 00:13:28.378 "is_configured": true, 00:13:28.378 "data_offset": 0, 00:13:28.378 "data_size": 65536 00:13:28.378 } 00:13:28.378 ] 00:13:28.378 }' 00:13:28.378 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:28.378 10:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.942 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:29.201 [2024-07-25 10:28:32.735190] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:29.201 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:29.201 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:29.201 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:29.201 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:29.201 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:29.201 10:28:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:29.201 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:29.201 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:29.201 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:29.201 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:29.201 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.201 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.459 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:29.459 "name": "Existed_Raid", 00:13:29.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.459 "strip_size_kb": 64, 00:13:29.459 "state": "configuring", 00:13:29.459 "raid_level": "concat", 00:13:29.459 "superblock": false, 00:13:29.459 "num_base_bdevs": 3, 00:13:29.459 "num_base_bdevs_discovered": 1, 00:13:29.459 "num_base_bdevs_operational": 3, 00:13:29.459 "base_bdevs_list": [ 00:13:29.459 { 00:13:29.459 "name": "BaseBdev1", 00:13:29.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.459 "is_configured": false, 00:13:29.459 "data_offset": 0, 00:13:29.459 "data_size": 0 00:13:29.459 }, 00:13:29.459 { 00:13:29.459 "name": null, 00:13:29.459 "uuid": "6ce9d2ee-d39d-49a9-9c48-80e5ae5fea6a", 00:13:29.459 "is_configured": false, 00:13:29.459 "data_offset": 0, 00:13:29.459 "data_size": 65536 00:13:29.459 }, 00:13:29.459 { 00:13:29.459 "name": "BaseBdev3", 00:13:29.459 "uuid": "1a8309aa-faac-4abb-a4cb-8b170426ca21", 00:13:29.459 "is_configured": true, 00:13:29.459 "data_offset": 0, 
00:13:29.459 "data_size": 65536 00:13:29.459 } 00:13:29.459 ] 00:13:29.459 }' 00:13:29.459 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:29.459 10:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.023 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.024 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:30.281 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:30.281 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:30.539 [2024-07-25 10:28:34.138008] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.539 BaseBdev1 00:13:30.539 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:30.539 10:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:30.539 10:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:30.539 10:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:30.539 10:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:30.539 10:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:30.539 10:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:30.797 10:28:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:31.055 [ 00:13:31.055 { 00:13:31.055 "name": "BaseBdev1", 00:13:31.055 "aliases": [ 00:13:31.055 "53b29973-3c04-4346-9679-d6bc27f4d7b4" 00:13:31.055 ], 00:13:31.055 "product_name": "Malloc disk", 00:13:31.055 "block_size": 512, 00:13:31.055 "num_blocks": 65536, 00:13:31.055 "uuid": "53b29973-3c04-4346-9679-d6bc27f4d7b4", 00:13:31.055 "assigned_rate_limits": { 00:13:31.055 "rw_ios_per_sec": 0, 00:13:31.055 "rw_mbytes_per_sec": 0, 00:13:31.055 "r_mbytes_per_sec": 0, 00:13:31.055 "w_mbytes_per_sec": 0 00:13:31.055 }, 00:13:31.055 "claimed": true, 00:13:31.055 "claim_type": "exclusive_write", 00:13:31.055 "zoned": false, 00:13:31.055 "supported_io_types": { 00:13:31.055 "read": true, 00:13:31.055 "write": true, 00:13:31.055 "unmap": true, 00:13:31.055 "flush": true, 00:13:31.055 "reset": true, 00:13:31.055 "nvme_admin": false, 00:13:31.055 "nvme_io": false, 00:13:31.055 "nvme_io_md": false, 00:13:31.055 "write_zeroes": true, 00:13:31.055 "zcopy": true, 00:13:31.055 "get_zone_info": false, 00:13:31.055 "zone_management": false, 00:13:31.055 "zone_append": false, 00:13:31.055 "compare": false, 00:13:31.055 "compare_and_write": false, 00:13:31.055 "abort": true, 00:13:31.055 "seek_hole": false, 00:13:31.055 "seek_data": false, 00:13:31.055 "copy": true, 00:13:31.055 "nvme_iov_md": false 00:13:31.055 }, 00:13:31.055 "memory_domains": [ 00:13:31.055 { 00:13:31.055 "dma_device_id": "system", 00:13:31.055 "dma_device_type": 1 00:13:31.055 }, 00:13:31.055 { 00:13:31.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.055 "dma_device_type": 2 00:13:31.055 } 00:13:31.055 ], 00:13:31.055 "driver_specific": {} 00:13:31.055 } 00:13:31.055 ] 00:13:31.055 10:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:31.055 10:28:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:31.055 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:31.055 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:31.055 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:31.055 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:31.055 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:31.055 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:31.055 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:31.055 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:31.055 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:31.055 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.055 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.313 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:31.313 "name": "Existed_Raid", 00:13:31.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.313 "strip_size_kb": 64, 00:13:31.313 "state": "configuring", 00:13:31.313 "raid_level": "concat", 00:13:31.313 "superblock": false, 00:13:31.313 "num_base_bdevs": 3, 00:13:31.313 "num_base_bdevs_discovered": 2, 00:13:31.313 "num_base_bdevs_operational": 3, 00:13:31.313 "base_bdevs_list": [ 00:13:31.313 { 
00:13:31.313 "name": "BaseBdev1", 00:13:31.313 "uuid": "53b29973-3c04-4346-9679-d6bc27f4d7b4", 00:13:31.313 "is_configured": true, 00:13:31.313 "data_offset": 0, 00:13:31.313 "data_size": 65536 00:13:31.313 }, 00:13:31.313 { 00:13:31.313 "name": null, 00:13:31.313 "uuid": "6ce9d2ee-d39d-49a9-9c48-80e5ae5fea6a", 00:13:31.313 "is_configured": false, 00:13:31.313 "data_offset": 0, 00:13:31.313 "data_size": 65536 00:13:31.313 }, 00:13:31.313 { 00:13:31.313 "name": "BaseBdev3", 00:13:31.313 "uuid": "1a8309aa-faac-4abb-a4cb-8b170426ca21", 00:13:31.313 "is_configured": true, 00:13:31.313 "data_offset": 0, 00:13:31.313 "data_size": 65536 00:13:31.313 } 00:13:31.313 ] 00:13:31.313 }' 00:13:31.313 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:31.313 10:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.879 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.879 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:32.136 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:32.136 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:32.394 [2024-07-25 10:28:36.043125] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:32.394 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:32.394 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:32.394 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:13:32.394 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:32.394 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:32.394 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:32.394 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:32.394 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:32.394 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:32.394 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:32.394 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.394 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.657 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:32.657 "name": "Existed_Raid", 00:13:32.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.657 "strip_size_kb": 64, 00:13:32.657 "state": "configuring", 00:13:32.657 "raid_level": "concat", 00:13:32.657 "superblock": false, 00:13:32.657 "num_base_bdevs": 3, 00:13:32.657 "num_base_bdevs_discovered": 1, 00:13:32.657 "num_base_bdevs_operational": 3, 00:13:32.657 "base_bdevs_list": [ 00:13:32.657 { 00:13:32.657 "name": "BaseBdev1", 00:13:32.657 "uuid": "53b29973-3c04-4346-9679-d6bc27f4d7b4", 00:13:32.657 "is_configured": true, 00:13:32.657 "data_offset": 0, 00:13:32.657 "data_size": 65536 00:13:32.657 }, 00:13:32.657 { 00:13:32.657 "name": null, 00:13:32.657 "uuid": "6ce9d2ee-d39d-49a9-9c48-80e5ae5fea6a", 00:13:32.657 
"is_configured": false, 00:13:32.657 "data_offset": 0, 00:13:32.657 "data_size": 65536 00:13:32.657 }, 00:13:32.657 { 00:13:32.657 "name": null, 00:13:32.657 "uuid": "1a8309aa-faac-4abb-a4cb-8b170426ca21", 00:13:32.657 "is_configured": false, 00:13:32.657 "data_offset": 0, 00:13:32.657 "data_size": 65536 00:13:32.657 } 00:13:32.657 ] 00:13:32.657 }' 00:13:32.657 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:32.657 10:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.271 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.271 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:33.529 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:33.530 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:33.787 [2024-07-25 10:28:37.378721] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.787 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:33.787 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:33.787 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:33.787 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:33.787 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:33.787 10:28:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:33.787 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:33.787 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:33.787 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.787 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.787 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.787 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.045 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:34.045 "name": "Existed_Raid", 00:13:34.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.045 "strip_size_kb": 64, 00:13:34.045 "state": "configuring", 00:13:34.045 "raid_level": "concat", 00:13:34.045 "superblock": false, 00:13:34.045 "num_base_bdevs": 3, 00:13:34.045 "num_base_bdevs_discovered": 2, 00:13:34.045 "num_base_bdevs_operational": 3, 00:13:34.045 "base_bdevs_list": [ 00:13:34.045 { 00:13:34.045 "name": "BaseBdev1", 00:13:34.045 "uuid": "53b29973-3c04-4346-9679-d6bc27f4d7b4", 00:13:34.045 "is_configured": true, 00:13:34.045 "data_offset": 0, 00:13:34.045 "data_size": 65536 00:13:34.045 }, 00:13:34.045 { 00:13:34.045 "name": null, 00:13:34.045 "uuid": "6ce9d2ee-d39d-49a9-9c48-80e5ae5fea6a", 00:13:34.045 "is_configured": false, 00:13:34.045 "data_offset": 0, 00:13:34.045 "data_size": 65536 00:13:34.045 }, 00:13:34.045 { 00:13:34.045 "name": "BaseBdev3", 00:13:34.045 "uuid": "1a8309aa-faac-4abb-a4cb-8b170426ca21", 00:13:34.045 "is_configured": true, 00:13:34.045 "data_offset": 0, 
00:13:34.045 "data_size": 65536 00:13:34.045 } 00:13:34.045 ] 00:13:34.045 }' 00:13:34.045 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:34.045 10:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.610 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.610 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:34.867 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:34.867 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:35.125 [2024-07-25 10:28:38.714263] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.125 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:35.125 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:35.125 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:35.125 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:35.125 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:35.125 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:35.125 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:35.125 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:35.125 
10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:35.125 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:35.125 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.125 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.385 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:35.385 "name": "Existed_Raid", 00:13:35.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.385 "strip_size_kb": 64, 00:13:35.385 "state": "configuring", 00:13:35.385 "raid_level": "concat", 00:13:35.385 "superblock": false, 00:13:35.385 "num_base_bdevs": 3, 00:13:35.385 "num_base_bdevs_discovered": 1, 00:13:35.385 "num_base_bdevs_operational": 3, 00:13:35.385 "base_bdevs_list": [ 00:13:35.385 { 00:13:35.385 "name": null, 00:13:35.385 "uuid": "53b29973-3c04-4346-9679-d6bc27f4d7b4", 00:13:35.385 "is_configured": false, 00:13:35.385 "data_offset": 0, 00:13:35.385 "data_size": 65536 00:13:35.385 }, 00:13:35.385 { 00:13:35.385 "name": null, 00:13:35.385 "uuid": "6ce9d2ee-d39d-49a9-9c48-80e5ae5fea6a", 00:13:35.385 "is_configured": false, 00:13:35.385 "data_offset": 0, 00:13:35.385 "data_size": 65536 00:13:35.385 }, 00:13:35.385 { 00:13:35.385 "name": "BaseBdev3", 00:13:35.385 "uuid": "1a8309aa-faac-4abb-a4cb-8b170426ca21", 00:13:35.385 "is_configured": true, 00:13:35.385 "data_offset": 0, 00:13:35.385 "data_size": 65536 00:13:35.385 } 00:13:35.385 ] 00:13:35.385 }' 00:13:35.385 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:35.385 10:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.950 10:28:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.950 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:36.208 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:36.208 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:36.466 [2024-07-25 10:28:40.147028] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.466 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:36.466 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:36.466 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:36.466 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:36.466 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:36.466 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:36.466 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:36.466 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:36.466 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:36.466 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:36.466 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.466 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.724 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:36.724 "name": "Existed_Raid", 00:13:36.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.724 "strip_size_kb": 64, 00:13:36.724 "state": "configuring", 00:13:36.724 "raid_level": "concat", 00:13:36.724 "superblock": false, 00:13:36.724 "num_base_bdevs": 3, 00:13:36.724 "num_base_bdevs_discovered": 2, 00:13:36.724 "num_base_bdevs_operational": 3, 00:13:36.724 "base_bdevs_list": [ 00:13:36.724 { 00:13:36.724 "name": null, 00:13:36.724 "uuid": "53b29973-3c04-4346-9679-d6bc27f4d7b4", 00:13:36.724 "is_configured": false, 00:13:36.724 "data_offset": 0, 00:13:36.724 "data_size": 65536 00:13:36.724 }, 00:13:36.724 { 00:13:36.724 "name": "BaseBdev2", 00:13:36.724 "uuid": "6ce9d2ee-d39d-49a9-9c48-80e5ae5fea6a", 00:13:36.724 "is_configured": true, 00:13:36.724 "data_offset": 0, 00:13:36.724 "data_size": 65536 00:13:36.724 }, 00:13:36.724 { 00:13:36.724 "name": "BaseBdev3", 00:13:36.724 "uuid": "1a8309aa-faac-4abb-a4cb-8b170426ca21", 00:13:36.724 "is_configured": true, 00:13:36.724 "data_offset": 0, 00:13:36.724 "data_size": 65536 00:13:36.724 } 00:13:36.724 ] 00:13:36.724 }' 00:13:36.724 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:36.724 10:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.290 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.290 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:37.548 
10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:37.548 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.548 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:37.806 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 53b29973-3c04-4346-9679-d6bc27f4d7b4 00:13:38.373 [2024-07-25 10:28:41.788574] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:38.373 [2024-07-25 10:28:41.788617] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x23f7a30 00:13:38.373 [2024-07-25 10:28:41.788625] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:38.373 [2024-07-25 10:28:41.788790] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23f2c40 00:13:38.373 [2024-07-25 10:28:41.788909] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23f7a30 00:13:38.373 [2024-07-25 10:28:41.788922] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x23f7a30 00:13:38.373 [2024-07-25 10:28:41.789151] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.373 NewBaseBdev 00:13:38.373 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:38.373 10:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:38.373 10:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:38.373 10:28:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local i 00:13:38.373 10:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:38.373 10:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:38.373 10:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:38.631 10:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:38.890 [ 00:13:38.890 { 00:13:38.890 "name": "NewBaseBdev", 00:13:38.890 "aliases": [ 00:13:38.890 "53b29973-3c04-4346-9679-d6bc27f4d7b4" 00:13:38.890 ], 00:13:38.890 "product_name": "Malloc disk", 00:13:38.890 "block_size": 512, 00:13:38.890 "num_blocks": 65536, 00:13:38.890 "uuid": "53b29973-3c04-4346-9679-d6bc27f4d7b4", 00:13:38.890 "assigned_rate_limits": { 00:13:38.890 "rw_ios_per_sec": 0, 00:13:38.890 "rw_mbytes_per_sec": 0, 00:13:38.890 "r_mbytes_per_sec": 0, 00:13:38.890 "w_mbytes_per_sec": 0 00:13:38.890 }, 00:13:38.890 "claimed": true, 00:13:38.890 "claim_type": "exclusive_write", 00:13:38.890 "zoned": false, 00:13:38.890 "supported_io_types": { 00:13:38.890 "read": true, 00:13:38.890 "write": true, 00:13:38.890 "unmap": true, 00:13:38.890 "flush": true, 00:13:38.890 "reset": true, 00:13:38.890 "nvme_admin": false, 00:13:38.890 "nvme_io": false, 00:13:38.890 "nvme_io_md": false, 00:13:38.890 "write_zeroes": true, 00:13:38.890 "zcopy": true, 00:13:38.890 "get_zone_info": false, 00:13:38.890 "zone_management": false, 00:13:38.890 "zone_append": false, 00:13:38.890 "compare": false, 00:13:38.890 "compare_and_write": false, 00:13:38.890 "abort": true, 00:13:38.890 "seek_hole": false, 00:13:38.890 "seek_data": false, 00:13:38.890 "copy": true, 00:13:38.890 "nvme_iov_md": 
false 00:13:38.890 }, 00:13:38.890 "memory_domains": [ 00:13:38.890 { 00:13:38.890 "dma_device_id": "system", 00:13:38.890 "dma_device_type": 1 00:13:38.890 }, 00:13:38.890 { 00:13:38.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.890 "dma_device_type": 2 00:13:38.890 } 00:13:38.890 ], 00:13:38.890 "driver_specific": {} 00:13:38.890 } 00:13:38.890 ] 00:13:38.890 10:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:38.890 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:38.890 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:38.890 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:38.890 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:38.890 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:38.890 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:38.890 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:38.890 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:38.890 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:38.890 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:38.890 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.890 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.149 10:28:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:39.149 "name": "Existed_Raid", 00:13:39.149 "uuid": "5de42a15-31cf-491d-b2e9-fdfaffaf05ea", 00:13:39.149 "strip_size_kb": 64, 00:13:39.149 "state": "online", 00:13:39.149 "raid_level": "concat", 00:13:39.149 "superblock": false, 00:13:39.149 "num_base_bdevs": 3, 00:13:39.149 "num_base_bdevs_discovered": 3, 00:13:39.149 "num_base_bdevs_operational": 3, 00:13:39.149 "base_bdevs_list": [ 00:13:39.149 { 00:13:39.149 "name": "NewBaseBdev", 00:13:39.149 "uuid": "53b29973-3c04-4346-9679-d6bc27f4d7b4", 00:13:39.149 "is_configured": true, 00:13:39.149 "data_offset": 0, 00:13:39.149 "data_size": 65536 00:13:39.149 }, 00:13:39.149 { 00:13:39.149 "name": "BaseBdev2", 00:13:39.149 "uuid": "6ce9d2ee-d39d-49a9-9c48-80e5ae5fea6a", 00:13:39.149 "is_configured": true, 00:13:39.149 "data_offset": 0, 00:13:39.149 "data_size": 65536 00:13:39.149 }, 00:13:39.149 { 00:13:39.149 "name": "BaseBdev3", 00:13:39.149 "uuid": "1a8309aa-faac-4abb-a4cb-8b170426ca21", 00:13:39.149 "is_configured": true, 00:13:39.149 "data_offset": 0, 00:13:39.149 "data_size": 65536 00:13:39.149 } 00:13:39.149 ] 00:13:39.149 }' 00:13:39.149 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:39.149 10:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.716 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:39.716 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:39.716 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:39.716 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:39.716 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:39.716 10:28:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:39.716 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:39.716 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:39.716 [2024-07-25 10:28:43.389064] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.716 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:39.716 "name": "Existed_Raid", 00:13:39.716 "aliases": [ 00:13:39.716 "5de42a15-31cf-491d-b2e9-fdfaffaf05ea" 00:13:39.716 ], 00:13:39.716 "product_name": "Raid Volume", 00:13:39.716 "block_size": 512, 00:13:39.716 "num_blocks": 196608, 00:13:39.716 "uuid": "5de42a15-31cf-491d-b2e9-fdfaffaf05ea", 00:13:39.716 "assigned_rate_limits": { 00:13:39.716 "rw_ios_per_sec": 0, 00:13:39.716 "rw_mbytes_per_sec": 0, 00:13:39.716 "r_mbytes_per_sec": 0, 00:13:39.716 "w_mbytes_per_sec": 0 00:13:39.716 }, 00:13:39.716 "claimed": false, 00:13:39.716 "zoned": false, 00:13:39.716 "supported_io_types": { 00:13:39.716 "read": true, 00:13:39.716 "write": true, 00:13:39.716 "unmap": true, 00:13:39.716 "flush": true, 00:13:39.716 "reset": true, 00:13:39.716 "nvme_admin": false, 00:13:39.716 "nvme_io": false, 00:13:39.716 "nvme_io_md": false, 00:13:39.716 "write_zeroes": true, 00:13:39.716 "zcopy": false, 00:13:39.716 "get_zone_info": false, 00:13:39.716 "zone_management": false, 00:13:39.716 "zone_append": false, 00:13:39.716 "compare": false, 00:13:39.716 "compare_and_write": false, 00:13:39.716 "abort": false, 00:13:39.716 "seek_hole": false, 00:13:39.716 "seek_data": false, 00:13:39.716 "copy": false, 00:13:39.716 "nvme_iov_md": false 00:13:39.716 }, 00:13:39.716 "memory_domains": [ 00:13:39.716 { 00:13:39.716 "dma_device_id": "system", 00:13:39.716 "dma_device_type": 1 00:13:39.716 }, 
00:13:39.716 { 00:13:39.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.716 "dma_device_type": 2 00:13:39.716 }, 00:13:39.716 { 00:13:39.716 "dma_device_id": "system", 00:13:39.716 "dma_device_type": 1 00:13:39.716 }, 00:13:39.716 { 00:13:39.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.716 "dma_device_type": 2 00:13:39.716 }, 00:13:39.716 { 00:13:39.716 "dma_device_id": "system", 00:13:39.716 "dma_device_type": 1 00:13:39.716 }, 00:13:39.716 { 00:13:39.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.716 "dma_device_type": 2 00:13:39.716 } 00:13:39.716 ], 00:13:39.716 "driver_specific": { 00:13:39.716 "raid": { 00:13:39.716 "uuid": "5de42a15-31cf-491d-b2e9-fdfaffaf05ea", 00:13:39.716 "strip_size_kb": 64, 00:13:39.716 "state": "online", 00:13:39.716 "raid_level": "concat", 00:13:39.716 "superblock": false, 00:13:39.716 "num_base_bdevs": 3, 00:13:39.716 "num_base_bdevs_discovered": 3, 00:13:39.716 "num_base_bdevs_operational": 3, 00:13:39.716 "base_bdevs_list": [ 00:13:39.716 { 00:13:39.716 "name": "NewBaseBdev", 00:13:39.716 "uuid": "53b29973-3c04-4346-9679-d6bc27f4d7b4", 00:13:39.716 "is_configured": true, 00:13:39.716 "data_offset": 0, 00:13:39.716 "data_size": 65536 00:13:39.716 }, 00:13:39.716 { 00:13:39.716 "name": "BaseBdev2", 00:13:39.716 "uuid": "6ce9d2ee-d39d-49a9-9c48-80e5ae5fea6a", 00:13:39.716 "is_configured": true, 00:13:39.716 "data_offset": 0, 00:13:39.716 "data_size": 65536 00:13:39.716 }, 00:13:39.716 { 00:13:39.716 "name": "BaseBdev3", 00:13:39.716 "uuid": "1a8309aa-faac-4abb-a4cb-8b170426ca21", 00:13:39.716 "is_configured": true, 00:13:39.716 "data_offset": 0, 00:13:39.716 "data_size": 65536 00:13:39.716 } 00:13:39.716 ] 00:13:39.716 } 00:13:39.716 } 00:13:39.716 }' 00:13:39.716 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:39.973 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='NewBaseBdev 00:13:39.973 BaseBdev2 00:13:39.973 BaseBdev3' 00:13:39.973 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:39.973 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:39.973 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:40.231 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:40.231 "name": "NewBaseBdev", 00:13:40.231 "aliases": [ 00:13:40.231 "53b29973-3c04-4346-9679-d6bc27f4d7b4" 00:13:40.231 ], 00:13:40.231 "product_name": "Malloc disk", 00:13:40.231 "block_size": 512, 00:13:40.231 "num_blocks": 65536, 00:13:40.231 "uuid": "53b29973-3c04-4346-9679-d6bc27f4d7b4", 00:13:40.231 "assigned_rate_limits": { 00:13:40.231 "rw_ios_per_sec": 0, 00:13:40.231 "rw_mbytes_per_sec": 0, 00:13:40.231 "r_mbytes_per_sec": 0, 00:13:40.231 "w_mbytes_per_sec": 0 00:13:40.231 }, 00:13:40.231 "claimed": true, 00:13:40.231 "claim_type": "exclusive_write", 00:13:40.231 "zoned": false, 00:13:40.231 "supported_io_types": { 00:13:40.231 "read": true, 00:13:40.231 "write": true, 00:13:40.231 "unmap": true, 00:13:40.231 "flush": true, 00:13:40.231 "reset": true, 00:13:40.231 "nvme_admin": false, 00:13:40.231 "nvme_io": false, 00:13:40.231 "nvme_io_md": false, 00:13:40.231 "write_zeroes": true, 00:13:40.231 "zcopy": true, 00:13:40.231 "get_zone_info": false, 00:13:40.231 "zone_management": false, 00:13:40.231 "zone_append": false, 00:13:40.231 "compare": false, 00:13:40.231 "compare_and_write": false, 00:13:40.231 "abort": true, 00:13:40.231 "seek_hole": false, 00:13:40.231 "seek_data": false, 00:13:40.231 "copy": true, 00:13:40.231 "nvme_iov_md": false 00:13:40.231 }, 00:13:40.231 "memory_domains": [ 00:13:40.231 { 00:13:40.231 "dma_device_id": "system", 00:13:40.231 
"dma_device_type": 1 00:13:40.231 }, 00:13:40.231 { 00:13:40.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.231 "dma_device_type": 2 00:13:40.231 } 00:13:40.231 ], 00:13:40.231 "driver_specific": {} 00:13:40.231 }' 00:13:40.231 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.231 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.231 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:40.231 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.231 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.231 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:40.231 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.231 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.231 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:40.231 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.489 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.489 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:40.489 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:40.489 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:40.489 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:40.747 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:40.747 "name": 
"BaseBdev2", 00:13:40.747 "aliases": [ 00:13:40.747 "6ce9d2ee-d39d-49a9-9c48-80e5ae5fea6a" 00:13:40.747 ], 00:13:40.747 "product_name": "Malloc disk", 00:13:40.747 "block_size": 512, 00:13:40.747 "num_blocks": 65536, 00:13:40.747 "uuid": "6ce9d2ee-d39d-49a9-9c48-80e5ae5fea6a", 00:13:40.747 "assigned_rate_limits": { 00:13:40.747 "rw_ios_per_sec": 0, 00:13:40.747 "rw_mbytes_per_sec": 0, 00:13:40.747 "r_mbytes_per_sec": 0, 00:13:40.747 "w_mbytes_per_sec": 0 00:13:40.747 }, 00:13:40.747 "claimed": true, 00:13:40.747 "claim_type": "exclusive_write", 00:13:40.747 "zoned": false, 00:13:40.747 "supported_io_types": { 00:13:40.747 "read": true, 00:13:40.747 "write": true, 00:13:40.747 "unmap": true, 00:13:40.747 "flush": true, 00:13:40.747 "reset": true, 00:13:40.747 "nvme_admin": false, 00:13:40.747 "nvme_io": false, 00:13:40.747 "nvme_io_md": false, 00:13:40.747 "write_zeroes": true, 00:13:40.747 "zcopy": true, 00:13:40.747 "get_zone_info": false, 00:13:40.747 "zone_management": false, 00:13:40.747 "zone_append": false, 00:13:40.747 "compare": false, 00:13:40.747 "compare_and_write": false, 00:13:40.747 "abort": true, 00:13:40.747 "seek_hole": false, 00:13:40.747 "seek_data": false, 00:13:40.747 "copy": true, 00:13:40.747 "nvme_iov_md": false 00:13:40.747 }, 00:13:40.747 "memory_domains": [ 00:13:40.747 { 00:13:40.747 "dma_device_id": "system", 00:13:40.747 "dma_device_type": 1 00:13:40.747 }, 00:13:40.747 { 00:13:40.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.747 "dma_device_type": 2 00:13:40.747 } 00:13:40.747 ], 00:13:40.747 "driver_specific": {} 00:13:40.747 }' 00:13:40.747 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.747 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.747 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:40.747 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:13:40.747 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.747 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:40.748 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.748 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.005 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:41.005 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.005 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.005 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:41.005 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:41.005 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:41.005 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:41.263 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:41.263 "name": "BaseBdev3", 00:13:41.263 "aliases": [ 00:13:41.263 "1a8309aa-faac-4abb-a4cb-8b170426ca21" 00:13:41.263 ], 00:13:41.263 "product_name": "Malloc disk", 00:13:41.263 "block_size": 512, 00:13:41.263 "num_blocks": 65536, 00:13:41.263 "uuid": "1a8309aa-faac-4abb-a4cb-8b170426ca21", 00:13:41.263 "assigned_rate_limits": { 00:13:41.263 "rw_ios_per_sec": 0, 00:13:41.263 "rw_mbytes_per_sec": 0, 00:13:41.263 "r_mbytes_per_sec": 0, 00:13:41.263 "w_mbytes_per_sec": 0 00:13:41.263 }, 00:13:41.263 "claimed": true, 00:13:41.263 "claim_type": "exclusive_write", 00:13:41.263 "zoned": false, 00:13:41.263 "supported_io_types": { 
00:13:41.263 "read": true, 00:13:41.263 "write": true, 00:13:41.263 "unmap": true, 00:13:41.263 "flush": true, 00:13:41.263 "reset": true, 00:13:41.263 "nvme_admin": false, 00:13:41.263 "nvme_io": false, 00:13:41.263 "nvme_io_md": false, 00:13:41.263 "write_zeroes": true, 00:13:41.263 "zcopy": true, 00:13:41.263 "get_zone_info": false, 00:13:41.263 "zone_management": false, 00:13:41.263 "zone_append": false, 00:13:41.263 "compare": false, 00:13:41.263 "compare_and_write": false, 00:13:41.263 "abort": true, 00:13:41.263 "seek_hole": false, 00:13:41.263 "seek_data": false, 00:13:41.263 "copy": true, 00:13:41.263 "nvme_iov_md": false 00:13:41.263 }, 00:13:41.263 "memory_domains": [ 00:13:41.263 { 00:13:41.263 "dma_device_id": "system", 00:13:41.263 "dma_device_type": 1 00:13:41.263 }, 00:13:41.263 { 00:13:41.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.263 "dma_device_type": 2 00:13:41.263 } 00:13:41.263 ], 00:13:41.263 "driver_specific": {} 00:13:41.263 }' 00:13:41.263 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.263 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.263 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:41.263 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.263 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.263 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:41.263 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.263 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.521 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:41.521 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:13:41.521 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.521 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:41.521 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:41.779 [2024-07-25 10:28:45.305878] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.779 [2024-07-25 10:28:45.305905] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.779 [2024-07-25 10:28:45.305970] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.779 [2024-07-25 10:28:45.306037] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.779 [2024-07-25 10:28:45.306052] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23f7a30 name Existed_Raid, state offline 00:13:41.779 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2363249 00:13:41.779 10:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2363249 ']' 00:13:41.779 10:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2363249 00:13:41.779 10:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:41.779 10:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.779 10:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2363249 00:13:41.779 10:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:41.779 10:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:13:41.779 10:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2363249' 00:13:41.779 killing process with pid 2363249 00:13:41.779 10:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2363249 00:13:41.779 [2024-07-25 10:28:45.357208] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.779 10:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2363249 00:13:41.779 [2024-07-25 10:28:45.394381] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:42.037 00:13:42.037 real 0m29.047s 00:13:42.037 user 0m54.225s 00:13:42.037 sys 0m3.873s 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.037 ************************************ 00:13:42.037 END TEST raid_state_function_test 00:13:42.037 ************************************ 00:13:42.037 10:28:45 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:13:42.037 10:28:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:42.037 10:28:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.037 10:28:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.037 ************************************ 00:13:42.037 START TEST raid_state_function_test_sb 00:13:42.037 ************************************ 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:13:42.037 10:28:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:42.037 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:42.038 10:28:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2367308 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2367308' 00:13:42.038 Process raid pid: 2367308 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2367308 /var/tmp/spdk-raid.sock 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2367308 ']' 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:13:42.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.038 10:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.296 [2024-07-25 10:28:45.784912] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:13:42.296 [2024-07-25 10:28:45.784996] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.296 [2024-07-25 10:28:45.866397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.296 [2024-07-25 10:28:45.981473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.554 [2024-07-25 10:28:46.058175] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.554 [2024-07-25 10:28:46.058208] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.118 10:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.118 10:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:43.118 10:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:43.375 [2024-07-25 10:28:46.973015] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.375 [2024-07-25 10:28:46.973055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.375 [2024-07-25 10:28:46.973080] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:13:43.375 [2024-07-25 10:28:46.973092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:43.375 [2024-07-25 10:28:46.973100] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:43.375 [2024-07-25 10:28:46.973118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:43.375 10:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:43.375 10:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:43.375 10:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:43.375 10:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:43.375 10:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:43.375 10:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:43.375 10:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:43.375 10:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:43.375 10:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:43.375 10:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:43.375 10:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.375 10:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.632 10:28:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:43.632 "name": "Existed_Raid", 00:13:43.632 "uuid": "d7d911b7-17c2-4539-8e74-0ce4793af174", 00:13:43.632 "strip_size_kb": 64, 00:13:43.632 "state": "configuring", 00:13:43.632 "raid_level": "concat", 00:13:43.632 "superblock": true, 00:13:43.632 "num_base_bdevs": 3, 00:13:43.632 "num_base_bdevs_discovered": 0, 00:13:43.632 "num_base_bdevs_operational": 3, 00:13:43.632 "base_bdevs_list": [ 00:13:43.632 { 00:13:43.632 "name": "BaseBdev1", 00:13:43.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.632 "is_configured": false, 00:13:43.632 "data_offset": 0, 00:13:43.632 "data_size": 0 00:13:43.632 }, 00:13:43.632 { 00:13:43.632 "name": "BaseBdev2", 00:13:43.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.632 "is_configured": false, 00:13:43.632 "data_offset": 0, 00:13:43.632 "data_size": 0 00:13:43.632 }, 00:13:43.632 { 00:13:43.632 "name": "BaseBdev3", 00:13:43.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.632 "is_configured": false, 00:13:43.632 "data_offset": 0, 00:13:43.632 "data_size": 0 00:13:43.633 } 00:13:43.633 ] 00:13:43.633 }' 00:13:43.633 10:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:43.633 10:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.196 10:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:44.454 [2024-07-25 10:28:48.007635] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.454 [2024-07-25 10:28:48.007668] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9cd620 name Existed_Raid, state configuring 00:13:44.454 10:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:44.711 [2024-07-25 10:28:48.252299] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.711 [2024-07-25 10:28:48.252336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.711 [2024-07-25 10:28:48.252348] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.711 [2024-07-25 10:28:48.252362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.711 [2024-07-25 10:28:48.252371] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.711 [2024-07-25 10:28:48.252383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.711 10:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:44.968 [2024-07-25 10:28:48.505533] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.969 BaseBdev1 00:13:44.969 10:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:44.969 10:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:44.969 10:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:44.969 10:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:44.969 10:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:44.969 10:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:13:44.969 10:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:45.226 10:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:45.483 [ 00:13:45.483 { 00:13:45.483 "name": "BaseBdev1", 00:13:45.483 "aliases": [ 00:13:45.483 "b6646d6d-8e1b-43c7-97b8-8b371e6fcc43" 00:13:45.483 ], 00:13:45.483 "product_name": "Malloc disk", 00:13:45.483 "block_size": 512, 00:13:45.483 "num_blocks": 65536, 00:13:45.483 "uuid": "b6646d6d-8e1b-43c7-97b8-8b371e6fcc43", 00:13:45.483 "assigned_rate_limits": { 00:13:45.483 "rw_ios_per_sec": 0, 00:13:45.483 "rw_mbytes_per_sec": 0, 00:13:45.483 "r_mbytes_per_sec": 0, 00:13:45.483 "w_mbytes_per_sec": 0 00:13:45.483 }, 00:13:45.483 "claimed": true, 00:13:45.483 "claim_type": "exclusive_write", 00:13:45.483 "zoned": false, 00:13:45.483 "supported_io_types": { 00:13:45.483 "read": true, 00:13:45.483 "write": true, 00:13:45.483 "unmap": true, 00:13:45.483 "flush": true, 00:13:45.483 "reset": true, 00:13:45.483 "nvme_admin": false, 00:13:45.483 "nvme_io": false, 00:13:45.483 "nvme_io_md": false, 00:13:45.483 "write_zeroes": true, 00:13:45.483 "zcopy": true, 00:13:45.483 "get_zone_info": false, 00:13:45.483 "zone_management": false, 00:13:45.483 "zone_append": false, 00:13:45.483 "compare": false, 00:13:45.483 "compare_and_write": false, 00:13:45.483 "abort": true, 00:13:45.483 "seek_hole": false, 00:13:45.483 "seek_data": false, 00:13:45.483 "copy": true, 00:13:45.483 "nvme_iov_md": false 00:13:45.483 }, 00:13:45.483 "memory_domains": [ 00:13:45.483 { 00:13:45.483 "dma_device_id": "system", 00:13:45.483 "dma_device_type": 1 00:13:45.483 }, 00:13:45.483 { 00:13:45.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.483 
"dma_device_type": 2 00:13:45.483 } 00:13:45.483 ], 00:13:45.483 "driver_specific": {} 00:13:45.483 } 00:13:45.483 ] 00:13:45.483 10:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:45.483 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:45.483 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:45.483 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:45.483 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:45.483 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:45.483 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:45.483 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:45.483 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:45.483 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:45.483 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:45.483 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.483 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.741 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:45.741 "name": "Existed_Raid", 00:13:45.741 "uuid": "386b9fec-c7bd-48fe-a383-f4cef7dcd37f", 00:13:45.741 "strip_size_kb": 64, 
00:13:45.741 "state": "configuring", 00:13:45.741 "raid_level": "concat", 00:13:45.741 "superblock": true, 00:13:45.741 "num_base_bdevs": 3, 00:13:45.741 "num_base_bdevs_discovered": 1, 00:13:45.741 "num_base_bdevs_operational": 3, 00:13:45.741 "base_bdevs_list": [ 00:13:45.741 { 00:13:45.741 "name": "BaseBdev1", 00:13:45.741 "uuid": "b6646d6d-8e1b-43c7-97b8-8b371e6fcc43", 00:13:45.741 "is_configured": true, 00:13:45.741 "data_offset": 2048, 00:13:45.741 "data_size": 63488 00:13:45.741 }, 00:13:45.741 { 00:13:45.741 "name": "BaseBdev2", 00:13:45.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.741 "is_configured": false, 00:13:45.741 "data_offset": 0, 00:13:45.741 "data_size": 0 00:13:45.741 }, 00:13:45.741 { 00:13:45.741 "name": "BaseBdev3", 00:13:45.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.741 "is_configured": false, 00:13:45.741 "data_offset": 0, 00:13:45.741 "data_size": 0 00:13:45.741 } 00:13:45.741 ] 00:13:45.741 }' 00:13:45.741 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:45.741 10:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.306 10:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:46.563 [2024-07-25 10:28:50.113776] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:46.563 [2024-07-25 10:28:50.113826] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9cce50 name Existed_Raid, state configuring 00:13:46.563 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:46.821 [2024-07-25 10:28:50.354451] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.821 [2024-07-25 10:28:50.355977] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.821 [2024-07-25 10:28:50.356014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.821 [2024-07-25 10:28:50.356027] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:46.821 [2024-07-25 10:28:50.356040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 
-- # local tmp 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.821 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.078 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:47.078 "name": "Existed_Raid", 00:13:47.079 "uuid": "3f8b9c27-b946-49a2-b638-ef33acf70298", 00:13:47.079 "strip_size_kb": 64, 00:13:47.079 "state": "configuring", 00:13:47.079 "raid_level": "concat", 00:13:47.079 "superblock": true, 00:13:47.079 "num_base_bdevs": 3, 00:13:47.079 "num_base_bdevs_discovered": 1, 00:13:47.079 "num_base_bdevs_operational": 3, 00:13:47.079 "base_bdevs_list": [ 00:13:47.079 { 00:13:47.079 "name": "BaseBdev1", 00:13:47.079 "uuid": "b6646d6d-8e1b-43c7-97b8-8b371e6fcc43", 00:13:47.079 "is_configured": true, 00:13:47.079 "data_offset": 2048, 00:13:47.079 "data_size": 63488 00:13:47.079 }, 00:13:47.079 { 00:13:47.079 "name": "BaseBdev2", 00:13:47.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.079 "is_configured": false, 00:13:47.079 "data_offset": 0, 00:13:47.079 "data_size": 0 00:13:47.079 }, 00:13:47.079 { 00:13:47.079 "name": "BaseBdev3", 00:13:47.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.079 "is_configured": false, 00:13:47.079 "data_offset": 0, 00:13:47.079 "data_size": 0 00:13:47.079 } 00:13:47.079 ] 00:13:47.079 }' 00:13:47.079 10:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:47.079 10:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.644 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 
00:13:47.902 [2024-07-25 10:28:51.399085] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.902 BaseBdev2 00:13:47.902 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:47.902 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:47.902 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:47.902 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:47.902 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:47.902 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:47.903 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:48.160 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:48.418 [ 00:13:48.418 { 00:13:48.418 "name": "BaseBdev2", 00:13:48.418 "aliases": [ 00:13:48.418 "e8cb282c-29cb-4880-8abd-c9ca8290e5bd" 00:13:48.418 ], 00:13:48.418 "product_name": "Malloc disk", 00:13:48.418 "block_size": 512, 00:13:48.419 "num_blocks": 65536, 00:13:48.419 "uuid": "e8cb282c-29cb-4880-8abd-c9ca8290e5bd", 00:13:48.419 "assigned_rate_limits": { 00:13:48.419 "rw_ios_per_sec": 0, 00:13:48.419 "rw_mbytes_per_sec": 0, 00:13:48.419 "r_mbytes_per_sec": 0, 00:13:48.419 "w_mbytes_per_sec": 0 00:13:48.419 }, 00:13:48.419 "claimed": true, 00:13:48.419 "claim_type": "exclusive_write", 00:13:48.419 "zoned": false, 00:13:48.419 "supported_io_types": { 00:13:48.419 "read": true, 00:13:48.419 "write": true, 
00:13:48.419 "unmap": true, 00:13:48.419 "flush": true, 00:13:48.419 "reset": true, 00:13:48.419 "nvme_admin": false, 00:13:48.419 "nvme_io": false, 00:13:48.419 "nvme_io_md": false, 00:13:48.419 "write_zeroes": true, 00:13:48.419 "zcopy": true, 00:13:48.419 "get_zone_info": false, 00:13:48.419 "zone_management": false, 00:13:48.419 "zone_append": false, 00:13:48.419 "compare": false, 00:13:48.419 "compare_and_write": false, 00:13:48.419 "abort": true, 00:13:48.419 "seek_hole": false, 00:13:48.419 "seek_data": false, 00:13:48.419 "copy": true, 00:13:48.419 "nvme_iov_md": false 00:13:48.419 }, 00:13:48.419 "memory_domains": [ 00:13:48.419 { 00:13:48.419 "dma_device_id": "system", 00:13:48.419 "dma_device_type": 1 00:13:48.419 }, 00:13:48.419 { 00:13:48.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.419 "dma_device_type": 2 00:13:48.419 } 00:13:48.419 ], 00:13:48.419 "driver_specific": {} 00:13:48.419 } 00:13:48.419 ] 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.419 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.678 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.678 "name": "Existed_Raid", 00:13:48.678 "uuid": "3f8b9c27-b946-49a2-b638-ef33acf70298", 00:13:48.678 "strip_size_kb": 64, 00:13:48.678 "state": "configuring", 00:13:48.678 "raid_level": "concat", 00:13:48.678 "superblock": true, 00:13:48.678 "num_base_bdevs": 3, 00:13:48.678 "num_base_bdevs_discovered": 2, 00:13:48.678 "num_base_bdevs_operational": 3, 00:13:48.678 "base_bdevs_list": [ 00:13:48.678 { 00:13:48.678 "name": "BaseBdev1", 00:13:48.678 "uuid": "b6646d6d-8e1b-43c7-97b8-8b371e6fcc43", 00:13:48.678 "is_configured": true, 00:13:48.678 "data_offset": 2048, 00:13:48.678 "data_size": 63488 00:13:48.678 }, 00:13:48.678 { 00:13:48.678 "name": "BaseBdev2", 00:13:48.678 "uuid": "e8cb282c-29cb-4880-8abd-c9ca8290e5bd", 00:13:48.678 "is_configured": true, 00:13:48.678 "data_offset": 2048, 00:13:48.678 "data_size": 63488 00:13:48.678 }, 00:13:48.678 { 00:13:48.678 "name": "BaseBdev3", 00:13:48.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.678 "is_configured": false, 00:13:48.678 "data_offset": 0, 00:13:48.678 "data_size": 0 00:13:48.678 } 
00:13:48.678 ] 00:13:48.678 }' 00:13:48.678 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.678 10:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.275 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:49.275 [2024-07-25 10:28:52.941005] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.275 [2024-07-25 10:28:52.941242] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x9cdd90 00:13:49.275 [2024-07-25 10:28:52.941262] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:49.275 [2024-07-25 10:28:52.941436] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9d1a90 00:13:49.275 [2024-07-25 10:28:52.941587] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9cdd90 00:13:49.275 [2024-07-25 10:28:52.941603] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x9cdd90 00:13:49.275 [2024-07-25 10:28:52.941715] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.275 BaseBdev3 00:13:49.275 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:49.275 10:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:49.275 10:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:49.275 10:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:49.275 10:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:49.275 10:28:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:49.275 10:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:49.533 10:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:49.791 [ 00:13:49.791 { 00:13:49.791 "name": "BaseBdev3", 00:13:49.791 "aliases": [ 00:13:49.791 "fd9352f3-da80-4626-ba0f-710434e4a664" 00:13:49.791 ], 00:13:49.791 "product_name": "Malloc disk", 00:13:49.791 "block_size": 512, 00:13:49.791 "num_blocks": 65536, 00:13:49.791 "uuid": "fd9352f3-da80-4626-ba0f-710434e4a664", 00:13:49.791 "assigned_rate_limits": { 00:13:49.791 "rw_ios_per_sec": 0, 00:13:49.791 "rw_mbytes_per_sec": 0, 00:13:49.791 "r_mbytes_per_sec": 0, 00:13:49.791 "w_mbytes_per_sec": 0 00:13:49.791 }, 00:13:49.791 "claimed": true, 00:13:49.791 "claim_type": "exclusive_write", 00:13:49.791 "zoned": false, 00:13:49.791 "supported_io_types": { 00:13:49.791 "read": true, 00:13:49.791 "write": true, 00:13:49.791 "unmap": true, 00:13:49.791 "flush": true, 00:13:49.791 "reset": true, 00:13:49.791 "nvme_admin": false, 00:13:49.791 "nvme_io": false, 00:13:49.791 "nvme_io_md": false, 00:13:49.791 "write_zeroes": true, 00:13:49.791 "zcopy": true, 00:13:49.791 "get_zone_info": false, 00:13:49.791 "zone_management": false, 00:13:49.791 "zone_append": false, 00:13:49.791 "compare": false, 00:13:49.791 "compare_and_write": false, 00:13:49.791 "abort": true, 00:13:49.791 "seek_hole": false, 00:13:49.791 "seek_data": false, 00:13:49.791 "copy": true, 00:13:49.791 "nvme_iov_md": false 00:13:49.791 }, 00:13:49.791 "memory_domains": [ 00:13:49.791 { 00:13:49.791 "dma_device_id": "system", 00:13:49.791 "dma_device_type": 1 00:13:49.791 }, 00:13:49.791 { 00:13:49.791 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:49.791 "dma_device_type": 2 00:13:49.791 } 00:13:49.791 ], 00:13:49.791 "driver_specific": {} 00:13:49.791 } 00:13:49.791 ] 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.791 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:50.047 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:50.047 "name": "Existed_Raid", 00:13:50.047 "uuid": "3f8b9c27-b946-49a2-b638-ef33acf70298", 00:13:50.047 "strip_size_kb": 64, 00:13:50.047 "state": "online", 00:13:50.047 "raid_level": "concat", 00:13:50.047 "superblock": true, 00:13:50.047 "num_base_bdevs": 3, 00:13:50.047 "num_base_bdevs_discovered": 3, 00:13:50.047 "num_base_bdevs_operational": 3, 00:13:50.047 "base_bdevs_list": [ 00:13:50.047 { 00:13:50.047 "name": "BaseBdev1", 00:13:50.047 "uuid": "b6646d6d-8e1b-43c7-97b8-8b371e6fcc43", 00:13:50.047 "is_configured": true, 00:13:50.047 "data_offset": 2048, 00:13:50.047 "data_size": 63488 00:13:50.047 }, 00:13:50.047 { 00:13:50.048 "name": "BaseBdev2", 00:13:50.048 "uuid": "e8cb282c-29cb-4880-8abd-c9ca8290e5bd", 00:13:50.048 "is_configured": true, 00:13:50.048 "data_offset": 2048, 00:13:50.048 "data_size": 63488 00:13:50.048 }, 00:13:50.048 { 00:13:50.048 "name": "BaseBdev3", 00:13:50.048 "uuid": "fd9352f3-da80-4626-ba0f-710434e4a664", 00:13:50.048 "is_configured": true, 00:13:50.048 "data_offset": 2048, 00:13:50.048 "data_size": 63488 00:13:50.048 } 00:13:50.048 ] 00:13:50.048 }' 00:13:50.048 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:50.048 10:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.610 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:50.610 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:50.610 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:50.610 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:50.610 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:13:50.610 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:50.610 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:50.610 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:50.869 [2024-07-25 10:28:54.453407] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:50.869 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:50.869 "name": "Existed_Raid", 00:13:50.869 "aliases": [ 00:13:50.869 "3f8b9c27-b946-49a2-b638-ef33acf70298" 00:13:50.869 ], 00:13:50.869 "product_name": "Raid Volume", 00:13:50.869 "block_size": 512, 00:13:50.869 "num_blocks": 190464, 00:13:50.869 "uuid": "3f8b9c27-b946-49a2-b638-ef33acf70298", 00:13:50.869 "assigned_rate_limits": { 00:13:50.869 "rw_ios_per_sec": 0, 00:13:50.869 "rw_mbytes_per_sec": 0, 00:13:50.869 "r_mbytes_per_sec": 0, 00:13:50.869 "w_mbytes_per_sec": 0 00:13:50.869 }, 00:13:50.869 "claimed": false, 00:13:50.869 "zoned": false, 00:13:50.869 "supported_io_types": { 00:13:50.869 "read": true, 00:13:50.869 "write": true, 00:13:50.869 "unmap": true, 00:13:50.869 "flush": true, 00:13:50.869 "reset": true, 00:13:50.869 "nvme_admin": false, 00:13:50.869 "nvme_io": false, 00:13:50.869 "nvme_io_md": false, 00:13:50.869 "write_zeroes": true, 00:13:50.869 "zcopy": false, 00:13:50.869 "get_zone_info": false, 00:13:50.869 "zone_management": false, 00:13:50.869 "zone_append": false, 00:13:50.869 "compare": false, 00:13:50.869 "compare_and_write": false, 00:13:50.869 "abort": false, 00:13:50.869 "seek_hole": false, 00:13:50.869 "seek_data": false, 00:13:50.869 "copy": false, 00:13:50.869 "nvme_iov_md": false 00:13:50.869 }, 00:13:50.869 "memory_domains": [ 00:13:50.869 { 00:13:50.869 "dma_device_id": "system", 
00:13:50.869 "dma_device_type": 1 00:13:50.869 }, 00:13:50.869 { 00:13:50.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.869 "dma_device_type": 2 00:13:50.869 }, 00:13:50.869 { 00:13:50.869 "dma_device_id": "system", 00:13:50.869 "dma_device_type": 1 00:13:50.869 }, 00:13:50.869 { 00:13:50.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.869 "dma_device_type": 2 00:13:50.869 }, 00:13:50.869 { 00:13:50.869 "dma_device_id": "system", 00:13:50.869 "dma_device_type": 1 00:13:50.869 }, 00:13:50.869 { 00:13:50.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.869 "dma_device_type": 2 00:13:50.869 } 00:13:50.869 ], 00:13:50.869 "driver_specific": { 00:13:50.869 "raid": { 00:13:50.869 "uuid": "3f8b9c27-b946-49a2-b638-ef33acf70298", 00:13:50.869 "strip_size_kb": 64, 00:13:50.869 "state": "online", 00:13:50.869 "raid_level": "concat", 00:13:50.869 "superblock": true, 00:13:50.869 "num_base_bdevs": 3, 00:13:50.869 "num_base_bdevs_discovered": 3, 00:13:50.869 "num_base_bdevs_operational": 3, 00:13:50.869 "base_bdevs_list": [ 00:13:50.869 { 00:13:50.869 "name": "BaseBdev1", 00:13:50.869 "uuid": "b6646d6d-8e1b-43c7-97b8-8b371e6fcc43", 00:13:50.869 "is_configured": true, 00:13:50.869 "data_offset": 2048, 00:13:50.869 "data_size": 63488 00:13:50.869 }, 00:13:50.869 { 00:13:50.869 "name": "BaseBdev2", 00:13:50.869 "uuid": "e8cb282c-29cb-4880-8abd-c9ca8290e5bd", 00:13:50.869 "is_configured": true, 00:13:50.869 "data_offset": 2048, 00:13:50.869 "data_size": 63488 00:13:50.869 }, 00:13:50.869 { 00:13:50.869 "name": "BaseBdev3", 00:13:50.869 "uuid": "fd9352f3-da80-4626-ba0f-710434e4a664", 00:13:50.869 "is_configured": true, 00:13:50.869 "data_offset": 2048, 00:13:50.869 "data_size": 63488 00:13:50.869 } 00:13:50.869 ] 00:13:50.869 } 00:13:50.869 } 00:13:50.869 }' 00:13:50.869 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:50.869 10:28:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:50.869 BaseBdev2 00:13:50.869 BaseBdev3' 00:13:50.869 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:50.869 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:50.869 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:51.127 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:51.127 "name": "BaseBdev1", 00:13:51.127 "aliases": [ 00:13:51.127 "b6646d6d-8e1b-43c7-97b8-8b371e6fcc43" 00:13:51.127 ], 00:13:51.127 "product_name": "Malloc disk", 00:13:51.127 "block_size": 512, 00:13:51.127 "num_blocks": 65536, 00:13:51.127 "uuid": "b6646d6d-8e1b-43c7-97b8-8b371e6fcc43", 00:13:51.127 "assigned_rate_limits": { 00:13:51.127 "rw_ios_per_sec": 0, 00:13:51.127 "rw_mbytes_per_sec": 0, 00:13:51.127 "r_mbytes_per_sec": 0, 00:13:51.127 "w_mbytes_per_sec": 0 00:13:51.127 }, 00:13:51.127 "claimed": true, 00:13:51.127 "claim_type": "exclusive_write", 00:13:51.127 "zoned": false, 00:13:51.127 "supported_io_types": { 00:13:51.127 "read": true, 00:13:51.127 "write": true, 00:13:51.127 "unmap": true, 00:13:51.127 "flush": true, 00:13:51.127 "reset": true, 00:13:51.127 "nvme_admin": false, 00:13:51.127 "nvme_io": false, 00:13:51.127 "nvme_io_md": false, 00:13:51.127 "write_zeroes": true, 00:13:51.127 "zcopy": true, 00:13:51.127 "get_zone_info": false, 00:13:51.127 "zone_management": false, 00:13:51.127 "zone_append": false, 00:13:51.127 "compare": false, 00:13:51.127 "compare_and_write": false, 00:13:51.127 "abort": true, 00:13:51.127 "seek_hole": false, 00:13:51.127 "seek_data": false, 00:13:51.127 "copy": true, 00:13:51.127 "nvme_iov_md": false 00:13:51.127 }, 00:13:51.127 "memory_domains": 
[ 00:13:51.127 { 00:13:51.127 "dma_device_id": "system", 00:13:51.127 "dma_device_type": 1 00:13:51.127 }, 00:13:51.127 { 00:13:51.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.127 "dma_device_type": 2 00:13:51.127 } 00:13:51.127 ], 00:13:51.127 "driver_specific": {} 00:13:51.127 }' 00:13:51.127 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.127 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.127 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:51.127 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.385 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.385 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:51.385 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.385 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.385 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:51.385 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.385 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.385 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:51.385 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:51.385 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:51.385 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
00:13:51.643 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:51.643 "name": "BaseBdev2", 00:13:51.643 "aliases": [ 00:13:51.643 "e8cb282c-29cb-4880-8abd-c9ca8290e5bd" 00:13:51.643 ], 00:13:51.643 "product_name": "Malloc disk", 00:13:51.643 "block_size": 512, 00:13:51.643 "num_blocks": 65536, 00:13:51.643 "uuid": "e8cb282c-29cb-4880-8abd-c9ca8290e5bd", 00:13:51.643 "assigned_rate_limits": { 00:13:51.643 "rw_ios_per_sec": 0, 00:13:51.643 "rw_mbytes_per_sec": 0, 00:13:51.643 "r_mbytes_per_sec": 0, 00:13:51.643 "w_mbytes_per_sec": 0 00:13:51.643 }, 00:13:51.643 "claimed": true, 00:13:51.643 "claim_type": "exclusive_write", 00:13:51.643 "zoned": false, 00:13:51.643 "supported_io_types": { 00:13:51.643 "read": true, 00:13:51.643 "write": true, 00:13:51.643 "unmap": true, 00:13:51.643 "flush": true, 00:13:51.643 "reset": true, 00:13:51.643 "nvme_admin": false, 00:13:51.643 "nvme_io": false, 00:13:51.643 "nvme_io_md": false, 00:13:51.643 "write_zeroes": true, 00:13:51.643 "zcopy": true, 00:13:51.643 "get_zone_info": false, 00:13:51.643 "zone_management": false, 00:13:51.643 "zone_append": false, 00:13:51.643 "compare": false, 00:13:51.643 "compare_and_write": false, 00:13:51.643 "abort": true, 00:13:51.643 "seek_hole": false, 00:13:51.643 "seek_data": false, 00:13:51.643 "copy": true, 00:13:51.643 "nvme_iov_md": false 00:13:51.643 }, 00:13:51.643 "memory_domains": [ 00:13:51.643 { 00:13:51.643 "dma_device_id": "system", 00:13:51.643 "dma_device_type": 1 00:13:51.643 }, 00:13:51.643 { 00:13:51.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.643 "dma_device_type": 2 00:13:51.643 } 00:13:51.643 ], 00:13:51.643 "driver_specific": {} 00:13:51.643 }' 00:13:51.643 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.643 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.901 10:28:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:51.901 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.901 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.901 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:51.901 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.901 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.901 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:51.901 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.901 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.901 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:51.901 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:51.901 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:51.901 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:52.159 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:52.159 "name": "BaseBdev3", 00:13:52.159 "aliases": [ 00:13:52.159 "fd9352f3-da80-4626-ba0f-710434e4a664" 00:13:52.159 ], 00:13:52.159 "product_name": "Malloc disk", 00:13:52.159 "block_size": 512, 00:13:52.159 "num_blocks": 65536, 00:13:52.159 "uuid": "fd9352f3-da80-4626-ba0f-710434e4a664", 00:13:52.159 "assigned_rate_limits": { 00:13:52.159 "rw_ios_per_sec": 0, 00:13:52.159 "rw_mbytes_per_sec": 0, 00:13:52.159 "r_mbytes_per_sec": 0, 00:13:52.159 
"w_mbytes_per_sec": 0 00:13:52.159 }, 00:13:52.159 "claimed": true, 00:13:52.159 "claim_type": "exclusive_write", 00:13:52.159 "zoned": false, 00:13:52.159 "supported_io_types": { 00:13:52.159 "read": true, 00:13:52.159 "write": true, 00:13:52.159 "unmap": true, 00:13:52.159 "flush": true, 00:13:52.159 "reset": true, 00:13:52.159 "nvme_admin": false, 00:13:52.159 "nvme_io": false, 00:13:52.159 "nvme_io_md": false, 00:13:52.159 "write_zeroes": true, 00:13:52.159 "zcopy": true, 00:13:52.159 "get_zone_info": false, 00:13:52.159 "zone_management": false, 00:13:52.159 "zone_append": false, 00:13:52.159 "compare": false, 00:13:52.159 "compare_and_write": false, 00:13:52.159 "abort": true, 00:13:52.159 "seek_hole": false, 00:13:52.159 "seek_data": false, 00:13:52.159 "copy": true, 00:13:52.159 "nvme_iov_md": false 00:13:52.159 }, 00:13:52.159 "memory_domains": [ 00:13:52.159 { 00:13:52.159 "dma_device_id": "system", 00:13:52.159 "dma_device_type": 1 00:13:52.159 }, 00:13:52.159 { 00:13:52.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.159 "dma_device_type": 2 00:13:52.159 } 00:13:52.159 ], 00:13:52.159 "driver_specific": {} 00:13:52.159 }' 00:13:52.159 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.418 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.418 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:52.418 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.418 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.418 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:52.418 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:52.418 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:13:52.418 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:52.418 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.676 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.676 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:52.676 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:52.934 [2024-07-25 10:28:56.386286] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.934 [2024-07-25 10:28:56.386319] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.934 [2024-07-25 10:28:56.386370] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.934 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.192 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:53.192 "name": "Existed_Raid", 00:13:53.192 "uuid": "3f8b9c27-b946-49a2-b638-ef33acf70298", 00:13:53.192 "strip_size_kb": 64, 00:13:53.192 "state": "offline", 00:13:53.192 "raid_level": "concat", 00:13:53.192 "superblock": true, 00:13:53.192 "num_base_bdevs": 3, 00:13:53.192 "num_base_bdevs_discovered": 2, 00:13:53.192 "num_base_bdevs_operational": 2, 00:13:53.192 "base_bdevs_list": [ 00:13:53.192 { 00:13:53.192 "name": null, 00:13:53.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.192 "is_configured": false, 00:13:53.192 "data_offset": 2048, 00:13:53.192 "data_size": 63488 00:13:53.192 }, 00:13:53.192 { 00:13:53.192 "name": "BaseBdev2", 00:13:53.192 "uuid": "e8cb282c-29cb-4880-8abd-c9ca8290e5bd", 00:13:53.192 "is_configured": true, 00:13:53.192 "data_offset": 2048, 00:13:53.192 "data_size": 
63488 00:13:53.192 }, 00:13:53.192 { 00:13:53.192 "name": "BaseBdev3", 00:13:53.192 "uuid": "fd9352f3-da80-4626-ba0f-710434e4a664", 00:13:53.192 "is_configured": true, 00:13:53.192 "data_offset": 2048, 00:13:53.192 "data_size": 63488 00:13:53.193 } 00:13:53.193 ] 00:13:53.193 }' 00:13:53.193 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:53.193 10:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.758 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:53.758 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:53.758 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.758 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:54.016 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:54.016 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.016 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:54.273 [2024-07-25 10:28:57.760403] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:54.273 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:54.273 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:54.273 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:13:54.273 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:54.531 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:54.531 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.531 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:54.789 [2024-07-25 10:28:58.307452] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:54.789 [2024-07-25 10:28:58.307510] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9cdd90 name Existed_Raid, state offline 00:13:54.789 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:54.789 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:54.789 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.789 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:55.047 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:55.047 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:55.047 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:55.047 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:55.047 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:55.047 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:55.305 BaseBdev2 00:13:55.305 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:55.305 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:55.305 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.305 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:55.305 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.305 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.305 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:55.563 10:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:55.820 [ 00:13:55.820 { 00:13:55.820 "name": "BaseBdev2", 00:13:55.820 "aliases": [ 00:13:55.820 "a3fcdefa-f714-45e0-b5d3-be98fa7f90a2" 00:13:55.820 ], 00:13:55.820 "product_name": "Malloc disk", 00:13:55.820 "block_size": 512, 00:13:55.820 "num_blocks": 65536, 00:13:55.820 "uuid": "a3fcdefa-f714-45e0-b5d3-be98fa7f90a2", 00:13:55.820 "assigned_rate_limits": { 00:13:55.820 "rw_ios_per_sec": 0, 00:13:55.820 "rw_mbytes_per_sec": 0, 00:13:55.820 "r_mbytes_per_sec": 0, 00:13:55.820 "w_mbytes_per_sec": 0 00:13:55.820 }, 00:13:55.820 "claimed": false, 00:13:55.820 "zoned": false, 00:13:55.820 "supported_io_types": { 00:13:55.820 "read": true, 00:13:55.820 "write": true, 00:13:55.820 "unmap": true, 00:13:55.820 "flush": 
true, 00:13:55.820 "reset": true, 00:13:55.820 "nvme_admin": false, 00:13:55.820 "nvme_io": false, 00:13:55.820 "nvme_io_md": false, 00:13:55.820 "write_zeroes": true, 00:13:55.820 "zcopy": true, 00:13:55.820 "get_zone_info": false, 00:13:55.820 "zone_management": false, 00:13:55.820 "zone_append": false, 00:13:55.820 "compare": false, 00:13:55.820 "compare_and_write": false, 00:13:55.820 "abort": true, 00:13:55.821 "seek_hole": false, 00:13:55.821 "seek_data": false, 00:13:55.821 "copy": true, 00:13:55.821 "nvme_iov_md": false 00:13:55.821 }, 00:13:55.821 "memory_domains": [ 00:13:55.821 { 00:13:55.821 "dma_device_id": "system", 00:13:55.821 "dma_device_type": 1 00:13:55.821 }, 00:13:55.821 { 00:13:55.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.821 "dma_device_type": 2 00:13:55.821 } 00:13:55.821 ], 00:13:55.821 "driver_specific": {} 00:13:55.821 } 00:13:55.821 ] 00:13:55.821 10:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:55.821 10:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:55.821 10:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:55.821 10:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:56.078 BaseBdev3 00:13:56.078 10:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:56.078 10:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:56.078 10:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:56.078 10:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:56.078 10:28:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:56.078 10:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:56.078 10:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:56.336 10:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:56.594 [ 00:13:56.594 { 00:13:56.594 "name": "BaseBdev3", 00:13:56.594 "aliases": [ 00:13:56.594 "6f4c5647-a725-47f1-a715-2287f2bda878" 00:13:56.594 ], 00:13:56.594 "product_name": "Malloc disk", 00:13:56.594 "block_size": 512, 00:13:56.594 "num_blocks": 65536, 00:13:56.594 "uuid": "6f4c5647-a725-47f1-a715-2287f2bda878", 00:13:56.594 "assigned_rate_limits": { 00:13:56.594 "rw_ios_per_sec": 0, 00:13:56.594 "rw_mbytes_per_sec": 0, 00:13:56.594 "r_mbytes_per_sec": 0, 00:13:56.594 "w_mbytes_per_sec": 0 00:13:56.594 }, 00:13:56.594 "claimed": false, 00:13:56.594 "zoned": false, 00:13:56.594 "supported_io_types": { 00:13:56.594 "read": true, 00:13:56.594 "write": true, 00:13:56.594 "unmap": true, 00:13:56.594 "flush": true, 00:13:56.594 "reset": true, 00:13:56.594 "nvme_admin": false, 00:13:56.594 "nvme_io": false, 00:13:56.594 "nvme_io_md": false, 00:13:56.594 "write_zeroes": true, 00:13:56.594 "zcopy": true, 00:13:56.594 "get_zone_info": false, 00:13:56.594 "zone_management": false, 00:13:56.594 "zone_append": false, 00:13:56.594 "compare": false, 00:13:56.594 "compare_and_write": false, 00:13:56.594 "abort": true, 00:13:56.594 "seek_hole": false, 00:13:56.594 "seek_data": false, 00:13:56.594 "copy": true, 00:13:56.594 "nvme_iov_md": false 00:13:56.594 }, 00:13:56.594 "memory_domains": [ 00:13:56.594 { 00:13:56.594 "dma_device_id": "system", 00:13:56.594 "dma_device_type": 1 
00:13:56.594 }, 00:13:56.594 { 00:13:56.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.594 "dma_device_type": 2 00:13:56.594 } 00:13:56.594 ], 00:13:56.594 "driver_specific": {} 00:13:56.594 } 00:13:56.594 ] 00:13:56.594 10:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:56.594 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:56.594 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:56.594 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:56.852 [2024-07-25 10:29:00.500165] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:56.852 [2024-07-25 10:29:00.500209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:56.852 [2024-07-25 10:29:00.500252] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.852 [2024-07-25 10:29:00.501613] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.852 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:56.852 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:56.852 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:56.852 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:56.852 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:56.852 10:29:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:56.852 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.852 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.852 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.852 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.852 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.852 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.110 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:57.110 "name": "Existed_Raid", 00:13:57.110 "uuid": "696255da-6120-404e-802c-5aca7e1f4c7a", 00:13:57.110 "strip_size_kb": 64, 00:13:57.110 "state": "configuring", 00:13:57.110 "raid_level": "concat", 00:13:57.110 "superblock": true, 00:13:57.110 "num_base_bdevs": 3, 00:13:57.110 "num_base_bdevs_discovered": 2, 00:13:57.110 "num_base_bdevs_operational": 3, 00:13:57.110 "base_bdevs_list": [ 00:13:57.110 { 00:13:57.110 "name": "BaseBdev1", 00:13:57.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.110 "is_configured": false, 00:13:57.110 "data_offset": 0, 00:13:57.110 "data_size": 0 00:13:57.110 }, 00:13:57.110 { 00:13:57.110 "name": "BaseBdev2", 00:13:57.110 "uuid": "a3fcdefa-f714-45e0-b5d3-be98fa7f90a2", 00:13:57.110 "is_configured": true, 00:13:57.110 "data_offset": 2048, 00:13:57.110 "data_size": 63488 00:13:57.110 }, 00:13:57.110 { 00:13:57.110 "name": "BaseBdev3", 00:13:57.110 "uuid": "6f4c5647-a725-47f1-a715-2287f2bda878", 00:13:57.110 "is_configured": true, 00:13:57.110 "data_offset": 2048, 00:13:57.110 
"data_size": 63488 00:13:57.110 } 00:13:57.110 ] 00:13:57.110 }' 00:13:57.110 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:57.110 10:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.675 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:57.932 [2024-07-25 10:29:01.570931] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:57.932 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:57.932 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:57.932 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:57.932 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:57.933 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:57.933 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:57.933 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:57.933 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:57.933 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:57.933 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:57.933 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:13:57.933 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.190 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.190 "name": "Existed_Raid", 00:13:58.190 "uuid": "696255da-6120-404e-802c-5aca7e1f4c7a", 00:13:58.190 "strip_size_kb": 64, 00:13:58.190 "state": "configuring", 00:13:58.190 "raid_level": "concat", 00:13:58.190 "superblock": true, 00:13:58.190 "num_base_bdevs": 3, 00:13:58.190 "num_base_bdevs_discovered": 1, 00:13:58.190 "num_base_bdevs_operational": 3, 00:13:58.190 "base_bdevs_list": [ 00:13:58.190 { 00:13:58.190 "name": "BaseBdev1", 00:13:58.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.190 "is_configured": false, 00:13:58.190 "data_offset": 0, 00:13:58.190 "data_size": 0 00:13:58.190 }, 00:13:58.190 { 00:13:58.190 "name": null, 00:13:58.190 "uuid": "a3fcdefa-f714-45e0-b5d3-be98fa7f90a2", 00:13:58.190 "is_configured": false, 00:13:58.190 "data_offset": 2048, 00:13:58.190 "data_size": 63488 00:13:58.190 }, 00:13:58.190 { 00:13:58.190 "name": "BaseBdev3", 00:13:58.190 "uuid": "6f4c5647-a725-47f1-a715-2287f2bda878", 00:13:58.190 "is_configured": true, 00:13:58.190 "data_offset": 2048, 00:13:58.190 "data_size": 63488 00:13:58.190 } 00:13:58.190 ] 00:13:58.190 }' 00:13:58.190 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.190 10:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.754 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.754 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:59.011 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == 
\f\a\l\s\e ]] 00:13:59.011 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:59.268 [2024-07-25 10:29:02.933369] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.268 BaseBdev1 00:13:59.268 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:59.268 10:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:59.268 10:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:59.268 10:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:59.268 10:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:59.269 10:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:59.269 10:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:59.527 10:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:59.785 [ 00:13:59.785 { 00:13:59.785 "name": "BaseBdev1", 00:13:59.785 "aliases": [ 00:13:59.785 "c4b95151-a363-4b24-b95a-f8647894dd0d" 00:13:59.785 ], 00:13:59.785 "product_name": "Malloc disk", 00:13:59.785 "block_size": 512, 00:13:59.785 "num_blocks": 65536, 00:13:59.785 "uuid": "c4b95151-a363-4b24-b95a-f8647894dd0d", 00:13:59.785 "assigned_rate_limits": { 00:13:59.785 "rw_ios_per_sec": 0, 00:13:59.785 "rw_mbytes_per_sec": 0, 00:13:59.785 "r_mbytes_per_sec": 0, 00:13:59.785 
"w_mbytes_per_sec": 0 00:13:59.785 }, 00:13:59.785 "claimed": true, 00:13:59.785 "claim_type": "exclusive_write", 00:13:59.785 "zoned": false, 00:13:59.785 "supported_io_types": { 00:13:59.785 "read": true, 00:13:59.785 "write": true, 00:13:59.785 "unmap": true, 00:13:59.785 "flush": true, 00:13:59.785 "reset": true, 00:13:59.785 "nvme_admin": false, 00:13:59.785 "nvme_io": false, 00:13:59.785 "nvme_io_md": false, 00:13:59.785 "write_zeroes": true, 00:13:59.785 "zcopy": true, 00:13:59.785 "get_zone_info": false, 00:13:59.785 "zone_management": false, 00:13:59.785 "zone_append": false, 00:13:59.785 "compare": false, 00:13:59.785 "compare_and_write": false, 00:13:59.785 "abort": true, 00:13:59.785 "seek_hole": false, 00:13:59.785 "seek_data": false, 00:13:59.785 "copy": true, 00:13:59.785 "nvme_iov_md": false 00:13:59.785 }, 00:13:59.785 "memory_domains": [ 00:13:59.785 { 00:13:59.785 "dma_device_id": "system", 00:13:59.785 "dma_device_type": 1 00:13:59.785 }, 00:13:59.785 { 00:13:59.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.785 "dma_device_type": 2 00:13:59.785 } 00:13:59.785 ], 00:13:59.785 "driver_specific": {} 00:13:59.785 } 00:13:59.785 ] 00:14:00.044 10:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:00.044 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:00.044 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:00.044 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:00.044 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:00.044 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:00.044 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:14:00.044 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.044 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.044 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.044 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.044 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.044 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.300 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.300 "name": "Existed_Raid", 00:14:00.300 "uuid": "696255da-6120-404e-802c-5aca7e1f4c7a", 00:14:00.300 "strip_size_kb": 64, 00:14:00.300 "state": "configuring", 00:14:00.300 "raid_level": "concat", 00:14:00.300 "superblock": true, 00:14:00.300 "num_base_bdevs": 3, 00:14:00.300 "num_base_bdevs_discovered": 2, 00:14:00.300 "num_base_bdevs_operational": 3, 00:14:00.300 "base_bdevs_list": [ 00:14:00.300 { 00:14:00.300 "name": "BaseBdev1", 00:14:00.300 "uuid": "c4b95151-a363-4b24-b95a-f8647894dd0d", 00:14:00.300 "is_configured": true, 00:14:00.300 "data_offset": 2048, 00:14:00.300 "data_size": 63488 00:14:00.300 }, 00:14:00.300 { 00:14:00.300 "name": null, 00:14:00.300 "uuid": "a3fcdefa-f714-45e0-b5d3-be98fa7f90a2", 00:14:00.300 "is_configured": false, 00:14:00.300 "data_offset": 2048, 00:14:00.300 "data_size": 63488 00:14:00.300 }, 00:14:00.300 { 00:14:00.300 "name": "BaseBdev3", 00:14:00.301 "uuid": "6f4c5647-a725-47f1-a715-2287f2bda878", 00:14:00.301 "is_configured": true, 00:14:00.301 "data_offset": 2048, 00:14:00.301 "data_size": 63488 00:14:00.301 } 
00:14:00.301 ] 00:14:00.301 }' 00:14:00.301 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.301 10:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.864 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.864 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:01.121 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:01.121 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:01.379 [2024-07-25 10:29:04.894615] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:01.379 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:01.379 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.379 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:01.379 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:01.379 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.379 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:01.379 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.379 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.379 
10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.379 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.379 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.379 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.636 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.637 "name": "Existed_Raid", 00:14:01.637 "uuid": "696255da-6120-404e-802c-5aca7e1f4c7a", 00:14:01.637 "strip_size_kb": 64, 00:14:01.637 "state": "configuring", 00:14:01.637 "raid_level": "concat", 00:14:01.637 "superblock": true, 00:14:01.637 "num_base_bdevs": 3, 00:14:01.637 "num_base_bdevs_discovered": 1, 00:14:01.637 "num_base_bdevs_operational": 3, 00:14:01.637 "base_bdevs_list": [ 00:14:01.637 { 00:14:01.637 "name": "BaseBdev1", 00:14:01.637 "uuid": "c4b95151-a363-4b24-b95a-f8647894dd0d", 00:14:01.637 "is_configured": true, 00:14:01.637 "data_offset": 2048, 00:14:01.637 "data_size": 63488 00:14:01.637 }, 00:14:01.637 { 00:14:01.637 "name": null, 00:14:01.637 "uuid": "a3fcdefa-f714-45e0-b5d3-be98fa7f90a2", 00:14:01.637 "is_configured": false, 00:14:01.637 "data_offset": 2048, 00:14:01.637 "data_size": 63488 00:14:01.637 }, 00:14:01.637 { 00:14:01.637 "name": null, 00:14:01.637 "uuid": "6f4c5647-a725-47f1-a715-2287f2bda878", 00:14:01.637 "is_configured": false, 00:14:01.637 "data_offset": 2048, 00:14:01.637 "data_size": 63488 00:14:01.637 } 00:14:01.637 ] 00:14:01.637 }' 00:14:01.637 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.637 10:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.201 10:29:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.201 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:02.459 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:02.459 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:02.716 [2024-07-25 10:29:06.266247] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.716 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:02.716 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:02.717 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:02.717 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:02.717 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:02.717 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:02.717 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:02.717 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:02.717 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:02.717 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:02.717 10:29:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.717 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.974 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:02.974 "name": "Existed_Raid", 00:14:02.974 "uuid": "696255da-6120-404e-802c-5aca7e1f4c7a", 00:14:02.974 "strip_size_kb": 64, 00:14:02.974 "state": "configuring", 00:14:02.974 "raid_level": "concat", 00:14:02.974 "superblock": true, 00:14:02.974 "num_base_bdevs": 3, 00:14:02.974 "num_base_bdevs_discovered": 2, 00:14:02.974 "num_base_bdevs_operational": 3, 00:14:02.974 "base_bdevs_list": [ 00:14:02.974 { 00:14:02.975 "name": "BaseBdev1", 00:14:02.975 "uuid": "c4b95151-a363-4b24-b95a-f8647894dd0d", 00:14:02.975 "is_configured": true, 00:14:02.975 "data_offset": 2048, 00:14:02.975 "data_size": 63488 00:14:02.975 }, 00:14:02.975 { 00:14:02.975 "name": null, 00:14:02.975 "uuid": "a3fcdefa-f714-45e0-b5d3-be98fa7f90a2", 00:14:02.975 "is_configured": false, 00:14:02.975 "data_offset": 2048, 00:14:02.975 "data_size": 63488 00:14:02.975 }, 00:14:02.975 { 00:14:02.975 "name": "BaseBdev3", 00:14:02.975 "uuid": "6f4c5647-a725-47f1-a715-2287f2bda878", 00:14:02.975 "is_configured": true, 00:14:02.975 "data_offset": 2048, 00:14:02.975 "data_size": 63488 00:14:02.975 } 00:14:02.975 ] 00:14:02.975 }' 00:14:02.975 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:02.975 10:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.540 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.540 10:29:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:03.797 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:03.797 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:04.055 [2024-07-25 10:29:07.613884] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:04.055 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:04.055 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:04.055 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:04.055 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:04.055 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:04.055 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:04.055 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:04.055 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:04.055 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:04.055 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:04.055 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.055 10:29:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.313 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.313 "name": "Existed_Raid", 00:14:04.313 "uuid": "696255da-6120-404e-802c-5aca7e1f4c7a", 00:14:04.313 "strip_size_kb": 64, 00:14:04.313 "state": "configuring", 00:14:04.313 "raid_level": "concat", 00:14:04.313 "superblock": true, 00:14:04.313 "num_base_bdevs": 3, 00:14:04.313 "num_base_bdevs_discovered": 1, 00:14:04.313 "num_base_bdevs_operational": 3, 00:14:04.313 "base_bdevs_list": [ 00:14:04.313 { 00:14:04.313 "name": null, 00:14:04.313 "uuid": "c4b95151-a363-4b24-b95a-f8647894dd0d", 00:14:04.313 "is_configured": false, 00:14:04.313 "data_offset": 2048, 00:14:04.313 "data_size": 63488 00:14:04.313 }, 00:14:04.313 { 00:14:04.313 "name": null, 00:14:04.314 "uuid": "a3fcdefa-f714-45e0-b5d3-be98fa7f90a2", 00:14:04.314 "is_configured": false, 00:14:04.314 "data_offset": 2048, 00:14:04.314 "data_size": 63488 00:14:04.314 }, 00:14:04.314 { 00:14:04.314 "name": "BaseBdev3", 00:14:04.314 "uuid": "6f4c5647-a725-47f1-a715-2287f2bda878", 00:14:04.314 "is_configured": true, 00:14:04.314 "data_offset": 2048, 00:14:04.314 "data_size": 63488 00:14:04.314 } 00:14:04.314 ] 00:14:04.314 }' 00:14:04.314 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.314 10:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.878 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.878 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:05.137 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:05.137 10:29:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:05.400 [2024-07-25 10:29:09.032460] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.400 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:05.400 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:05.400 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:05.401 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:05.401 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:05.401 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:05.401 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.401 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.401 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.401 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.401 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.401 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.704 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:05.704 "name": 
"Existed_Raid", 00:14:05.704 "uuid": "696255da-6120-404e-802c-5aca7e1f4c7a", 00:14:05.704 "strip_size_kb": 64, 00:14:05.704 "state": "configuring", 00:14:05.704 "raid_level": "concat", 00:14:05.704 "superblock": true, 00:14:05.704 "num_base_bdevs": 3, 00:14:05.704 "num_base_bdevs_discovered": 2, 00:14:05.704 "num_base_bdevs_operational": 3, 00:14:05.704 "base_bdevs_list": [ 00:14:05.704 { 00:14:05.704 "name": null, 00:14:05.704 "uuid": "c4b95151-a363-4b24-b95a-f8647894dd0d", 00:14:05.704 "is_configured": false, 00:14:05.704 "data_offset": 2048, 00:14:05.704 "data_size": 63488 00:14:05.704 }, 00:14:05.704 { 00:14:05.704 "name": "BaseBdev2", 00:14:05.704 "uuid": "a3fcdefa-f714-45e0-b5d3-be98fa7f90a2", 00:14:05.704 "is_configured": true, 00:14:05.704 "data_offset": 2048, 00:14:05.704 "data_size": 63488 00:14:05.704 }, 00:14:05.704 { 00:14:05.704 "name": "BaseBdev3", 00:14:05.704 "uuid": "6f4c5647-a725-47f1-a715-2287f2bda878", 00:14:05.704 "is_configured": true, 00:14:05.704 "data_offset": 2048, 00:14:05.704 "data_size": 63488 00:14:05.704 } 00:14:05.704 ] 00:14:05.704 }' 00:14:05.704 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:05.704 10:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.270 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.270 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:06.528 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:06.528 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.528 10:29:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:06.786 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c4b95151-a363-4b24-b95a-f8647894dd0d 00:14:07.044 [2024-07-25 10:29:10.718308] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:07.044 [2024-07-25 10:29:10.718528] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x9c3d90 00:14:07.044 [2024-07-25 10:29:10.718558] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:07.044 [2024-07-25 10:29:10.718703] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9d1a90 00:14:07.044 [2024-07-25 10:29:10.718816] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9c3d90 00:14:07.044 [2024-07-25 10:29:10.718839] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x9c3d90 00:14:07.044 [2024-07-25 10:29:10.718928] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.044 NewBaseBdev 00:14:07.044 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:07.044 10:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:07.044 10:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:07.044 10:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:07.044 10:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:07.044 10:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:07.044 10:29:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:07.301 10:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:07.559 [ 00:14:07.559 { 00:14:07.559 "name": "NewBaseBdev", 00:14:07.559 "aliases": [ 00:14:07.559 "c4b95151-a363-4b24-b95a-f8647894dd0d" 00:14:07.559 ], 00:14:07.559 "product_name": "Malloc disk", 00:14:07.559 "block_size": 512, 00:14:07.559 "num_blocks": 65536, 00:14:07.559 "uuid": "c4b95151-a363-4b24-b95a-f8647894dd0d", 00:14:07.559 "assigned_rate_limits": { 00:14:07.559 "rw_ios_per_sec": 0, 00:14:07.559 "rw_mbytes_per_sec": 0, 00:14:07.559 "r_mbytes_per_sec": 0, 00:14:07.559 "w_mbytes_per_sec": 0 00:14:07.559 }, 00:14:07.559 "claimed": true, 00:14:07.559 "claim_type": "exclusive_write", 00:14:07.559 "zoned": false, 00:14:07.559 "supported_io_types": { 00:14:07.559 "read": true, 00:14:07.559 "write": true, 00:14:07.560 "unmap": true, 00:14:07.560 "flush": true, 00:14:07.560 "reset": true, 00:14:07.560 "nvme_admin": false, 00:14:07.560 "nvme_io": false, 00:14:07.560 "nvme_io_md": false, 00:14:07.560 "write_zeroes": true, 00:14:07.560 "zcopy": true, 00:14:07.560 "get_zone_info": false, 00:14:07.560 "zone_management": false, 00:14:07.560 "zone_append": false, 00:14:07.560 "compare": false, 00:14:07.560 "compare_and_write": false, 00:14:07.560 "abort": true, 00:14:07.560 "seek_hole": false, 00:14:07.560 "seek_data": false, 00:14:07.560 "copy": true, 00:14:07.560 "nvme_iov_md": false 00:14:07.560 }, 00:14:07.560 "memory_domains": [ 00:14:07.560 { 00:14:07.560 "dma_device_id": "system", 00:14:07.560 "dma_device_type": 1 00:14:07.560 }, 00:14:07.560 { 00:14:07.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.560 "dma_device_type": 2 00:14:07.560 } 
00:14:07.560 ], 00:14:07.560 "driver_specific": {} 00:14:07.560 } 00:14:07.560 ] 00:14:07.560 10:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:07.560 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:07.560 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:07.560 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:07.560 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:07.560 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:07.560 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:07.560 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:07.560 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:07.560 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:07.560 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:07.560 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.560 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.817 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:07.817 "name": "Existed_Raid", 00:14:07.817 "uuid": "696255da-6120-404e-802c-5aca7e1f4c7a", 00:14:07.817 "strip_size_kb": 64, 00:14:07.817 "state": "online", 00:14:07.817 
"raid_level": "concat", 00:14:07.817 "superblock": true, 00:14:07.817 "num_base_bdevs": 3, 00:14:07.817 "num_base_bdevs_discovered": 3, 00:14:07.817 "num_base_bdevs_operational": 3, 00:14:07.817 "base_bdevs_list": [ 00:14:07.817 { 00:14:07.817 "name": "NewBaseBdev", 00:14:07.817 "uuid": "c4b95151-a363-4b24-b95a-f8647894dd0d", 00:14:07.817 "is_configured": true, 00:14:07.817 "data_offset": 2048, 00:14:07.817 "data_size": 63488 00:14:07.817 }, 00:14:07.817 { 00:14:07.817 "name": "BaseBdev2", 00:14:07.817 "uuid": "a3fcdefa-f714-45e0-b5d3-be98fa7f90a2", 00:14:07.817 "is_configured": true, 00:14:07.817 "data_offset": 2048, 00:14:07.817 "data_size": 63488 00:14:07.817 }, 00:14:07.817 { 00:14:07.817 "name": "BaseBdev3", 00:14:07.817 "uuid": "6f4c5647-a725-47f1-a715-2287f2bda878", 00:14:07.817 "is_configured": true, 00:14:07.817 "data_offset": 2048, 00:14:07.817 "data_size": 63488 00:14:07.817 } 00:14:07.817 ] 00:14:07.817 }' 00:14:07.817 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:07.817 10:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.383 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:08.383 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:08.383 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:08.383 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:08.383 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:08.383 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:08.383 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:08.383 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:08.641 [2024-07-25 10:29:12.242690] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.642 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:08.642 "name": "Existed_Raid", 00:14:08.642 "aliases": [ 00:14:08.642 "696255da-6120-404e-802c-5aca7e1f4c7a" 00:14:08.642 ], 00:14:08.642 "product_name": "Raid Volume", 00:14:08.642 "block_size": 512, 00:14:08.642 "num_blocks": 190464, 00:14:08.642 "uuid": "696255da-6120-404e-802c-5aca7e1f4c7a", 00:14:08.642 "assigned_rate_limits": { 00:14:08.642 "rw_ios_per_sec": 0, 00:14:08.642 "rw_mbytes_per_sec": 0, 00:14:08.642 "r_mbytes_per_sec": 0, 00:14:08.642 "w_mbytes_per_sec": 0 00:14:08.642 }, 00:14:08.642 "claimed": false, 00:14:08.642 "zoned": false, 00:14:08.642 "supported_io_types": { 00:14:08.642 "read": true, 00:14:08.642 "write": true, 00:14:08.642 "unmap": true, 00:14:08.642 "flush": true, 00:14:08.642 "reset": true, 00:14:08.642 "nvme_admin": false, 00:14:08.642 "nvme_io": false, 00:14:08.642 "nvme_io_md": false, 00:14:08.642 "write_zeroes": true, 00:14:08.642 "zcopy": false, 00:14:08.642 "get_zone_info": false, 00:14:08.642 "zone_management": false, 00:14:08.642 "zone_append": false, 00:14:08.642 "compare": false, 00:14:08.642 "compare_and_write": false, 00:14:08.642 "abort": false, 00:14:08.642 "seek_hole": false, 00:14:08.642 "seek_data": false, 00:14:08.642 "copy": false, 00:14:08.642 "nvme_iov_md": false 00:14:08.642 }, 00:14:08.642 "memory_domains": [ 00:14:08.642 { 00:14:08.642 "dma_device_id": "system", 00:14:08.642 "dma_device_type": 1 00:14:08.642 }, 00:14:08.642 { 00:14:08.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.642 "dma_device_type": 2 00:14:08.642 }, 00:14:08.642 { 00:14:08.642 "dma_device_id": "system", 00:14:08.642 "dma_device_type": 1 00:14:08.642 
}, 00:14:08.642 { 00:14:08.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.642 "dma_device_type": 2 00:14:08.642 }, 00:14:08.642 { 00:14:08.642 "dma_device_id": "system", 00:14:08.642 "dma_device_type": 1 00:14:08.642 }, 00:14:08.642 { 00:14:08.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.642 "dma_device_type": 2 00:14:08.642 } 00:14:08.642 ], 00:14:08.642 "driver_specific": { 00:14:08.642 "raid": { 00:14:08.642 "uuid": "696255da-6120-404e-802c-5aca7e1f4c7a", 00:14:08.642 "strip_size_kb": 64, 00:14:08.642 "state": "online", 00:14:08.642 "raid_level": "concat", 00:14:08.642 "superblock": true, 00:14:08.642 "num_base_bdevs": 3, 00:14:08.642 "num_base_bdevs_discovered": 3, 00:14:08.642 "num_base_bdevs_operational": 3, 00:14:08.642 "base_bdevs_list": [ 00:14:08.642 { 00:14:08.642 "name": "NewBaseBdev", 00:14:08.642 "uuid": "c4b95151-a363-4b24-b95a-f8647894dd0d", 00:14:08.642 "is_configured": true, 00:14:08.642 "data_offset": 2048, 00:14:08.642 "data_size": 63488 00:14:08.642 }, 00:14:08.642 { 00:14:08.642 "name": "BaseBdev2", 00:14:08.642 "uuid": "a3fcdefa-f714-45e0-b5d3-be98fa7f90a2", 00:14:08.642 "is_configured": true, 00:14:08.642 "data_offset": 2048, 00:14:08.642 "data_size": 63488 00:14:08.642 }, 00:14:08.642 { 00:14:08.642 "name": "BaseBdev3", 00:14:08.642 "uuid": "6f4c5647-a725-47f1-a715-2287f2bda878", 00:14:08.642 "is_configured": true, 00:14:08.642 "data_offset": 2048, 00:14:08.642 "data_size": 63488 00:14:08.642 } 00:14:08.642 ] 00:14:08.642 } 00:14:08.642 } 00:14:08.642 }' 00:14:08.642 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.642 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:08.642 BaseBdev2 00:14:08.642 BaseBdev3' 00:14:08.642 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:08.642 
10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:08.642 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:08.900 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:08.900 "name": "NewBaseBdev", 00:14:08.900 "aliases": [ 00:14:08.900 "c4b95151-a363-4b24-b95a-f8647894dd0d" 00:14:08.900 ], 00:14:08.900 "product_name": "Malloc disk", 00:14:08.900 "block_size": 512, 00:14:08.900 "num_blocks": 65536, 00:14:08.900 "uuid": "c4b95151-a363-4b24-b95a-f8647894dd0d", 00:14:08.900 "assigned_rate_limits": { 00:14:08.900 "rw_ios_per_sec": 0, 00:14:08.900 "rw_mbytes_per_sec": 0, 00:14:08.900 "r_mbytes_per_sec": 0, 00:14:08.900 "w_mbytes_per_sec": 0 00:14:08.900 }, 00:14:08.900 "claimed": true, 00:14:08.900 "claim_type": "exclusive_write", 00:14:08.900 "zoned": false, 00:14:08.900 "supported_io_types": { 00:14:08.900 "read": true, 00:14:08.900 "write": true, 00:14:08.900 "unmap": true, 00:14:08.900 "flush": true, 00:14:08.900 "reset": true, 00:14:08.900 "nvme_admin": false, 00:14:08.900 "nvme_io": false, 00:14:08.900 "nvme_io_md": false, 00:14:08.900 "write_zeroes": true, 00:14:08.900 "zcopy": true, 00:14:08.900 "get_zone_info": false, 00:14:08.900 "zone_management": false, 00:14:08.900 "zone_append": false, 00:14:08.900 "compare": false, 00:14:08.900 "compare_and_write": false, 00:14:08.900 "abort": true, 00:14:08.900 "seek_hole": false, 00:14:08.900 "seek_data": false, 00:14:08.900 "copy": true, 00:14:08.900 "nvme_iov_md": false 00:14:08.900 }, 00:14:08.900 "memory_domains": [ 00:14:08.900 { 00:14:08.900 "dma_device_id": "system", 00:14:08.900 "dma_device_type": 1 00:14:08.900 }, 00:14:08.900 { 00:14:08.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.900 "dma_device_type": 2 00:14:08.900 } 00:14:08.900 ], 00:14:08.900 
"driver_specific": {} 00:14:08.900 }' 00:14:08.900 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:08.900 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.158 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:09.158 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.158 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.158 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:09.158 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.158 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.158 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.158 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.158 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.158 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:09.158 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:09.158 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:09.158 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:09.416 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:09.416 "name": "BaseBdev2", 00:14:09.416 "aliases": [ 00:14:09.416 "a3fcdefa-f714-45e0-b5d3-be98fa7f90a2" 00:14:09.416 ], 00:14:09.416 "product_name": 
"Malloc disk", 00:14:09.416 "block_size": 512, 00:14:09.416 "num_blocks": 65536, 00:14:09.416 "uuid": "a3fcdefa-f714-45e0-b5d3-be98fa7f90a2", 00:14:09.416 "assigned_rate_limits": { 00:14:09.416 "rw_ios_per_sec": 0, 00:14:09.416 "rw_mbytes_per_sec": 0, 00:14:09.416 "r_mbytes_per_sec": 0, 00:14:09.416 "w_mbytes_per_sec": 0 00:14:09.416 }, 00:14:09.416 "claimed": true, 00:14:09.416 "claim_type": "exclusive_write", 00:14:09.416 "zoned": false, 00:14:09.416 "supported_io_types": { 00:14:09.416 "read": true, 00:14:09.416 "write": true, 00:14:09.416 "unmap": true, 00:14:09.416 "flush": true, 00:14:09.416 "reset": true, 00:14:09.416 "nvme_admin": false, 00:14:09.416 "nvme_io": false, 00:14:09.416 "nvme_io_md": false, 00:14:09.416 "write_zeroes": true, 00:14:09.416 "zcopy": true, 00:14:09.416 "get_zone_info": false, 00:14:09.416 "zone_management": false, 00:14:09.416 "zone_append": false, 00:14:09.416 "compare": false, 00:14:09.416 "compare_and_write": false, 00:14:09.416 "abort": true, 00:14:09.416 "seek_hole": false, 00:14:09.416 "seek_data": false, 00:14:09.416 "copy": true, 00:14:09.416 "nvme_iov_md": false 00:14:09.416 }, 00:14:09.416 "memory_domains": [ 00:14:09.416 { 00:14:09.416 "dma_device_id": "system", 00:14:09.416 "dma_device_type": 1 00:14:09.416 }, 00:14:09.416 { 00:14:09.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.416 "dma_device_type": 2 00:14:09.416 } 00:14:09.416 ], 00:14:09.416 "driver_specific": {} 00:14:09.416 }' 00:14:09.416 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.674 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.674 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:09.674 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.674 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.674 
10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:09.674 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.674 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.674 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.674 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.674 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.933 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:09.933 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:09.933 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:09.933 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:10.191 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:10.191 "name": "BaseBdev3", 00:14:10.191 "aliases": [ 00:14:10.191 "6f4c5647-a725-47f1-a715-2287f2bda878" 00:14:10.191 ], 00:14:10.191 "product_name": "Malloc disk", 00:14:10.191 "block_size": 512, 00:14:10.191 "num_blocks": 65536, 00:14:10.191 "uuid": "6f4c5647-a725-47f1-a715-2287f2bda878", 00:14:10.191 "assigned_rate_limits": { 00:14:10.191 "rw_ios_per_sec": 0, 00:14:10.191 "rw_mbytes_per_sec": 0, 00:14:10.191 "r_mbytes_per_sec": 0, 00:14:10.191 "w_mbytes_per_sec": 0 00:14:10.191 }, 00:14:10.191 "claimed": true, 00:14:10.191 "claim_type": "exclusive_write", 00:14:10.191 "zoned": false, 00:14:10.191 "supported_io_types": { 00:14:10.191 "read": true, 00:14:10.191 "write": true, 00:14:10.191 "unmap": true, 
00:14:10.191 "flush": true, 00:14:10.191 "reset": true, 00:14:10.191 "nvme_admin": false, 00:14:10.191 "nvme_io": false, 00:14:10.191 "nvme_io_md": false, 00:14:10.191 "write_zeroes": true, 00:14:10.191 "zcopy": true, 00:14:10.191 "get_zone_info": false, 00:14:10.191 "zone_management": false, 00:14:10.191 "zone_append": false, 00:14:10.191 "compare": false, 00:14:10.191 "compare_and_write": false, 00:14:10.191 "abort": true, 00:14:10.191 "seek_hole": false, 00:14:10.191 "seek_data": false, 00:14:10.191 "copy": true, 00:14:10.191 "nvme_iov_md": false 00:14:10.191 }, 00:14:10.191 "memory_domains": [ 00:14:10.191 { 00:14:10.191 "dma_device_id": "system", 00:14:10.191 "dma_device_type": 1 00:14:10.191 }, 00:14:10.191 { 00:14:10.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.191 "dma_device_type": 2 00:14:10.191 } 00:14:10.191 ], 00:14:10.191 "driver_specific": {} 00:14:10.191 }' 00:14:10.191 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.191 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.191 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:10.191 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:10.191 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:10.191 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:10.191 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:10.191 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:10.191 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:10.191 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:10.449 10:29:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:10.449 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:10.449 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:10.707 [2024-07-25 10:29:14.187603] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:10.707 [2024-07-25 10:29:14.187629] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:10.707 [2024-07-25 10:29:14.187700] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.707 [2024-07-25 10:29:14.187769] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.707 [2024-07-25 10:29:14.187786] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9c3d90 name Existed_Raid, state offline 00:14:10.707 10:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2367308 00:14:10.707 10:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2367308 ']' 00:14:10.707 10:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2367308 00:14:10.707 10:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:10.707 10:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:10.707 10:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2367308 00:14:10.707 10:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:10.707 10:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:14:10.707 10:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2367308' 00:14:10.707 killing process with pid 2367308 00:14:10.707 10:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2367308 00:14:10.707 [2024-07-25 10:29:14.236154] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.707 10:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2367308 00:14:10.707 [2024-07-25 10:29:14.273298] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.966 10:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:10.966 00:14:10.966 real 0m28.832s 00:14:10.966 user 0m53.781s 00:14:10.966 sys 0m3.883s 00:14:10.966 10:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:10.966 10:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.966 ************************************ 00:14:10.966 END TEST raid_state_function_test_sb 00:14:10.966 ************************************ 00:14:10.966 10:29:14 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:14:10.966 10:29:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:10.966 10:29:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.966 10:29:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.966 ************************************ 00:14:10.966 START TEST raid_superblock_test 00:14:10.966 ************************************ 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2371352 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2371352 /var/tmp/spdk-raid.sock 00:14:10.966 10:29:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2371352 ']' 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:10.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.966 10:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.966 [2024-07-25 10:29:14.663284] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:14:10.966 [2024-07-25 10:29:14.663378] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371352 ] 00:14:11.224 [2024-07-25 10:29:14.742461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.224 [2024-07-25 10:29:14.850381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.224 [2024-07-25 10:29:14.919536] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.224 [2024-07-25 10:29:14.919578] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.156 10:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.156 10:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:12.156 10:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:12.156 
10:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:12.156 10:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:12.156 10:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:12.156 10:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:12.156 10:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:12.156 10:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:12.156 10:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:12.156 10:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:12.414 malloc1 00:14:12.414 10:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:12.673 [2024-07-25 10:29:16.207640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:12.673 [2024-07-25 10:29:16.207707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.673 [2024-07-25 10:29:16.207737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25692b0 00:14:12.673 [2024-07-25 10:29:16.207754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.673 [2024-07-25 10:29:16.209677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.673 [2024-07-25 10:29:16.209706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 
00:14:12.673 pt1 00:14:12.673 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:12.673 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:12.673 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:12.673 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:12.673 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:12.673 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:12.673 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:12.673 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:12.673 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:12.931 malloc2 00:14:12.931 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:13.189 [2024-07-25 10:29:16.701052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:13.189 [2024-07-25 10:29:16.701127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.189 [2024-07-25 10:29:16.701165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x271c1e0 00:14:13.189 [2024-07-25 10:29:16.701179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.189 [2024-07-25 10:29:16.702833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:13.189 [2024-07-25 10:29:16.702861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:13.189 pt2 00:14:13.189 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:13.189 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:13.189 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:14:13.189 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:14:13.189 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:13.189 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:13.189 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:13.189 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:13.189 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:13.447 malloc3 00:14:13.447 10:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:13.705 [2024-07-25 10:29:17.266646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:13.705 [2024-07-25 10:29:17.266714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.705 [2024-07-25 10:29:17.266741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27024d0 00:14:13.705 [2024-07-25 10:29:17.266757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:13.705 [2024-07-25 10:29:17.268575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.705 [2024-07-25 10:29:17.268603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:13.705 pt3 00:14:13.705 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:13.705 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:13.705 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:14:13.963 [2024-07-25 10:29:17.559451] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:13.963 [2024-07-25 10:29:17.560896] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:13.963 [2024-07-25 10:29:17.560963] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:13.963 [2024-07-25 10:29:17.561177] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2701120 00:14:13.963 [2024-07-25 10:29:17.561195] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:13.963 [2024-07-25 10:29:17.561427] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2560dc0 00:14:13.964 [2024-07-25 10:29:17.561618] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2701120 00:14:13.964 [2024-07-25 10:29:17.561634] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2701120 00:14:13.964 [2024-07-25 10:29:17.561768] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.964 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:13.964 10:29:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:13.964 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:13.964 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:13.964 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:13.964 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:13.964 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.964 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.964 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.964 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:13.964 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.964 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.221 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.221 "name": "raid_bdev1", 00:14:14.221 "uuid": "f12923ca-59a7-48a0-a399-0fe55112e3ca", 00:14:14.221 "strip_size_kb": 64, 00:14:14.221 "state": "online", 00:14:14.221 "raid_level": "concat", 00:14:14.221 "superblock": true, 00:14:14.221 "num_base_bdevs": 3, 00:14:14.221 "num_base_bdevs_discovered": 3, 00:14:14.221 "num_base_bdevs_operational": 3, 00:14:14.221 "base_bdevs_list": [ 00:14:14.221 { 00:14:14.221 "name": "pt1", 00:14:14.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:14.221 "is_configured": true, 00:14:14.221 "data_offset": 2048, 00:14:14.221 "data_size": 63488 00:14:14.221 }, 00:14:14.221 
{ 00:14:14.221 "name": "pt2", 00:14:14.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.221 "is_configured": true, 00:14:14.222 "data_offset": 2048, 00:14:14.222 "data_size": 63488 00:14:14.222 }, 00:14:14.222 { 00:14:14.222 "name": "pt3", 00:14:14.222 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.222 "is_configured": true, 00:14:14.222 "data_offset": 2048, 00:14:14.222 "data_size": 63488 00:14:14.222 } 00:14:14.222 ] 00:14:14.222 }' 00:14:14.222 10:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.222 10:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.786 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:14.786 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:14.786 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:14.786 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:14.786 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:14.786 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:14.786 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:14.786 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:15.044 [2024-07-25 10:29:18.622493] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.044 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:15.044 "name": "raid_bdev1", 00:14:15.044 "aliases": [ 00:14:15.044 "f12923ca-59a7-48a0-a399-0fe55112e3ca" 00:14:15.044 ], 00:14:15.044 "product_name": "Raid Volume", 
00:14:15.044 "block_size": 512, 00:14:15.044 "num_blocks": 190464, 00:14:15.044 "uuid": "f12923ca-59a7-48a0-a399-0fe55112e3ca", 00:14:15.044 "assigned_rate_limits": { 00:14:15.044 "rw_ios_per_sec": 0, 00:14:15.044 "rw_mbytes_per_sec": 0, 00:14:15.044 "r_mbytes_per_sec": 0, 00:14:15.044 "w_mbytes_per_sec": 0 00:14:15.044 }, 00:14:15.044 "claimed": false, 00:14:15.044 "zoned": false, 00:14:15.044 "supported_io_types": { 00:14:15.044 "read": true, 00:14:15.044 "write": true, 00:14:15.044 "unmap": true, 00:14:15.044 "flush": true, 00:14:15.044 "reset": true, 00:14:15.044 "nvme_admin": false, 00:14:15.044 "nvme_io": false, 00:14:15.044 "nvme_io_md": false, 00:14:15.044 "write_zeroes": true, 00:14:15.044 "zcopy": false, 00:14:15.044 "get_zone_info": false, 00:14:15.044 "zone_management": false, 00:14:15.044 "zone_append": false, 00:14:15.044 "compare": false, 00:14:15.044 "compare_and_write": false, 00:14:15.044 "abort": false, 00:14:15.044 "seek_hole": false, 00:14:15.044 "seek_data": false, 00:14:15.044 "copy": false, 00:14:15.044 "nvme_iov_md": false 00:14:15.044 }, 00:14:15.044 "memory_domains": [ 00:14:15.044 { 00:14:15.044 "dma_device_id": "system", 00:14:15.044 "dma_device_type": 1 00:14:15.044 }, 00:14:15.044 { 00:14:15.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.044 "dma_device_type": 2 00:14:15.044 }, 00:14:15.044 { 00:14:15.044 "dma_device_id": "system", 00:14:15.044 "dma_device_type": 1 00:14:15.044 }, 00:14:15.044 { 00:14:15.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.044 "dma_device_type": 2 00:14:15.044 }, 00:14:15.044 { 00:14:15.044 "dma_device_id": "system", 00:14:15.044 "dma_device_type": 1 00:14:15.044 }, 00:14:15.044 { 00:14:15.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.044 "dma_device_type": 2 00:14:15.044 } 00:14:15.044 ], 00:14:15.044 "driver_specific": { 00:14:15.044 "raid": { 00:14:15.044 "uuid": "f12923ca-59a7-48a0-a399-0fe55112e3ca", 00:14:15.044 "strip_size_kb": 64, 00:14:15.044 "state": "online", 00:14:15.044 
"raid_level": "concat", 00:14:15.044 "superblock": true, 00:14:15.044 "num_base_bdevs": 3, 00:14:15.044 "num_base_bdevs_discovered": 3, 00:14:15.044 "num_base_bdevs_operational": 3, 00:14:15.044 "base_bdevs_list": [ 00:14:15.044 { 00:14:15.044 "name": "pt1", 00:14:15.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.044 "is_configured": true, 00:14:15.044 "data_offset": 2048, 00:14:15.044 "data_size": 63488 00:14:15.044 }, 00:14:15.044 { 00:14:15.044 "name": "pt2", 00:14:15.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.044 "is_configured": true, 00:14:15.044 "data_offset": 2048, 00:14:15.044 "data_size": 63488 00:14:15.044 }, 00:14:15.044 { 00:14:15.044 "name": "pt3", 00:14:15.044 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:15.044 "is_configured": true, 00:14:15.044 "data_offset": 2048, 00:14:15.044 "data_size": 63488 00:14:15.044 } 00:14:15.044 ] 00:14:15.044 } 00:14:15.044 } 00:14:15.044 }' 00:14:15.044 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:15.044 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:15.044 pt2 00:14:15.044 pt3' 00:14:15.044 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:15.044 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:15.044 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:15.301 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:15.301 "name": "pt1", 00:14:15.301 "aliases": [ 00:14:15.301 "00000000-0000-0000-0000-000000000001" 00:14:15.301 ], 00:14:15.301 "product_name": "passthru", 00:14:15.301 "block_size": 512, 00:14:15.301 "num_blocks": 65536, 00:14:15.301 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:14:15.301 "assigned_rate_limits": { 00:14:15.301 "rw_ios_per_sec": 0, 00:14:15.301 "rw_mbytes_per_sec": 0, 00:14:15.301 "r_mbytes_per_sec": 0, 00:14:15.301 "w_mbytes_per_sec": 0 00:14:15.301 }, 00:14:15.301 "claimed": true, 00:14:15.301 "claim_type": "exclusive_write", 00:14:15.301 "zoned": false, 00:14:15.301 "supported_io_types": { 00:14:15.301 "read": true, 00:14:15.301 "write": true, 00:14:15.302 "unmap": true, 00:14:15.302 "flush": true, 00:14:15.302 "reset": true, 00:14:15.302 "nvme_admin": false, 00:14:15.302 "nvme_io": false, 00:14:15.302 "nvme_io_md": false, 00:14:15.302 "write_zeroes": true, 00:14:15.302 "zcopy": true, 00:14:15.302 "get_zone_info": false, 00:14:15.302 "zone_management": false, 00:14:15.302 "zone_append": false, 00:14:15.302 "compare": false, 00:14:15.302 "compare_and_write": false, 00:14:15.302 "abort": true, 00:14:15.302 "seek_hole": false, 00:14:15.302 "seek_data": false, 00:14:15.302 "copy": true, 00:14:15.302 "nvme_iov_md": false 00:14:15.302 }, 00:14:15.302 "memory_domains": [ 00:14:15.302 { 00:14:15.302 "dma_device_id": "system", 00:14:15.302 "dma_device_type": 1 00:14:15.302 }, 00:14:15.302 { 00:14:15.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.302 "dma_device_type": 2 00:14:15.302 } 00:14:15.302 ], 00:14:15.302 "driver_specific": { 00:14:15.302 "passthru": { 00:14:15.302 "name": "pt1", 00:14:15.302 "base_bdev_name": "malloc1" 00:14:15.302 } 00:14:15.302 } 00:14:15.302 }' 00:14:15.302 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.302 10:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.560 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:15.560 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.560 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.560 10:29:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:15.560 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.560 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.560 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:15.560 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.560 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.560 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:15.560 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:15.560 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:15.560 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:15.818 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:15.818 "name": "pt2", 00:14:15.818 "aliases": [ 00:14:15.818 "00000000-0000-0000-0000-000000000002" 00:14:15.818 ], 00:14:15.818 "product_name": "passthru", 00:14:15.818 "block_size": 512, 00:14:15.818 "num_blocks": 65536, 00:14:15.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.818 "assigned_rate_limits": { 00:14:15.818 "rw_ios_per_sec": 0, 00:14:15.818 "rw_mbytes_per_sec": 0, 00:14:15.818 "r_mbytes_per_sec": 0, 00:14:15.818 "w_mbytes_per_sec": 0 00:14:15.818 }, 00:14:15.818 "claimed": true, 00:14:15.818 "claim_type": "exclusive_write", 00:14:15.818 "zoned": false, 00:14:15.818 "supported_io_types": { 00:14:15.818 "read": true, 00:14:15.818 "write": true, 00:14:15.818 "unmap": true, 00:14:15.818 "flush": true, 00:14:15.818 "reset": true, 00:14:15.818 "nvme_admin": false, 00:14:15.818 
"nvme_io": false, 00:14:15.818 "nvme_io_md": false, 00:14:15.818 "write_zeroes": true, 00:14:15.818 "zcopy": true, 00:14:15.818 "get_zone_info": false, 00:14:15.818 "zone_management": false, 00:14:15.818 "zone_append": false, 00:14:15.818 "compare": false, 00:14:15.818 "compare_and_write": false, 00:14:15.818 "abort": true, 00:14:15.818 "seek_hole": false, 00:14:15.818 "seek_data": false, 00:14:15.818 "copy": true, 00:14:15.818 "nvme_iov_md": false 00:14:15.818 }, 00:14:15.818 "memory_domains": [ 00:14:15.818 { 00:14:15.818 "dma_device_id": "system", 00:14:15.818 "dma_device_type": 1 00:14:15.818 }, 00:14:15.818 { 00:14:15.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.818 "dma_device_type": 2 00:14:15.819 } 00:14:15.819 ], 00:14:15.819 "driver_specific": { 00:14:15.819 "passthru": { 00:14:15.819 "name": "pt2", 00:14:15.819 "base_bdev_name": "malloc2" 00:14:15.819 } 00:14:15.819 } 00:14:15.819 }' 00:14:15.819 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.077 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.077 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.077 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.077 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.077 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:16.077 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.077 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.077 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.077 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.077 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:14:16.334 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.334 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.334 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:16.334 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:16.592 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:16.592 "name": "pt3", 00:14:16.592 "aliases": [ 00:14:16.592 "00000000-0000-0000-0000-000000000003" 00:14:16.592 ], 00:14:16.592 "product_name": "passthru", 00:14:16.592 "block_size": 512, 00:14:16.592 "num_blocks": 65536, 00:14:16.592 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:16.592 "assigned_rate_limits": { 00:14:16.592 "rw_ios_per_sec": 0, 00:14:16.592 "rw_mbytes_per_sec": 0, 00:14:16.592 "r_mbytes_per_sec": 0, 00:14:16.592 "w_mbytes_per_sec": 0 00:14:16.592 }, 00:14:16.592 "claimed": true, 00:14:16.592 "claim_type": "exclusive_write", 00:14:16.592 "zoned": false, 00:14:16.592 "supported_io_types": { 00:14:16.592 "read": true, 00:14:16.592 "write": true, 00:14:16.592 "unmap": true, 00:14:16.592 "flush": true, 00:14:16.592 "reset": true, 00:14:16.592 "nvme_admin": false, 00:14:16.592 "nvme_io": false, 00:14:16.592 "nvme_io_md": false, 00:14:16.592 "write_zeroes": true, 00:14:16.592 "zcopy": true, 00:14:16.592 "get_zone_info": false, 00:14:16.592 "zone_management": false, 00:14:16.592 "zone_append": false, 00:14:16.592 "compare": false, 00:14:16.592 "compare_and_write": false, 00:14:16.592 "abort": true, 00:14:16.592 "seek_hole": false, 00:14:16.592 "seek_data": false, 00:14:16.592 "copy": true, 00:14:16.592 "nvme_iov_md": false 00:14:16.592 }, 00:14:16.592 "memory_domains": [ 00:14:16.592 { 00:14:16.592 "dma_device_id": "system", 00:14:16.592 
"dma_device_type": 1 00:14:16.592 }, 00:14:16.592 { 00:14:16.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.592 "dma_device_type": 2 00:14:16.592 } 00:14:16.592 ], 00:14:16.592 "driver_specific": { 00:14:16.592 "passthru": { 00:14:16.592 "name": "pt3", 00:14:16.592 "base_bdev_name": "malloc3" 00:14:16.592 } 00:14:16.592 } 00:14:16.592 }' 00:14:16.592 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.592 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.592 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.592 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.592 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.592 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:16.592 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.592 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.592 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.592 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.850 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.850 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.850 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:16.850 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:17.108 [2024-07-25 10:29:20.575764] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.108 10:29:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f12923ca-59a7-48a0-a399-0fe55112e3ca 00:14:17.108 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f12923ca-59a7-48a0-a399-0fe55112e3ca ']' 00:14:17.108 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:17.366 [2024-07-25 10:29:20.840205] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.366 [2024-07-25 10:29:20.840233] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.367 [2024-07-25 10:29:20.840317] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.367 [2024-07-25 10:29:20.840401] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.367 [2024-07-25 10:29:20.840415] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2701120 name raid_bdev1, state offline 00:14:17.367 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.367 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:17.625 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:17.625 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:17.625 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:17.625 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:17.883 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i 
in "${base_bdevs_pt[@]}" 00:14:17.883 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:18.142 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:18.142 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:18.142 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:18.142 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:18.401 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:18.401 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:18.401 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:18.401 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:18.401 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:18.401 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.401 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:18.401 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.401 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:18.401 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.401 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:18.401 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:14:18.401 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:18.658 [2024-07-25 10:29:22.336185] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:18.658 [2024-07-25 10:29:22.337580] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:18.658 [2024-07-25 10:29:22.337624] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:18.658 [2024-07-25 10:29:22.337686] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:18.658 [2024-07-25 10:29:22.337754] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:18.659 [2024-07-25 10:29:22.337783] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:18.659 [2024-07-25 10:29:22.337804] bdev_raid.c:2382:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:14:18.659 [2024-07-25 10:29:22.337815] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x270c170 name raid_bdev1, state configuring 00:14:18.659 request: 00:14:18.659 { 00:14:18.659 "name": "raid_bdev1", 00:14:18.659 "raid_level": "concat", 00:14:18.659 "base_bdevs": [ 00:14:18.659 "malloc1", 00:14:18.659 "malloc2", 00:14:18.659 "malloc3" 00:14:18.659 ], 00:14:18.659 "strip_size_kb": 64, 00:14:18.659 "superblock": false, 00:14:18.659 "method": "bdev_raid_create", 00:14:18.659 "req_id": 1 00:14:18.659 } 00:14:18.659 Got JSON-RPC error response 00:14:18.659 response: 00:14:18.659 { 00:14:18.659 "code": -17, 00:14:18.659 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:18.659 } 00:14:18.659 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:18.659 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:18.659 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:18.659 10:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:18.659 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.659 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:18.917 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:18.917 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:18.917 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:19.176 [2024-07-25 10:29:22.849476] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:14:19.176 [2024-07-25 10:29:22.849549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.176 [2024-07-25 10:29:22.849572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2700df0 00:14:19.176 [2024-07-25 10:29:22.849586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.176 [2024-07-25 10:29:22.851203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.176 [2024-07-25 10:29:22.851227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:19.176 [2024-07-25 10:29:22.851332] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:19.176 [2024-07-25 10:29:22.851373] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:19.176 pt1 00:14:19.176 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:14:19.176 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:19.176 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:19.176 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:19.176 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:19.176 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:19.176 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.176 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.176 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.176 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:14:19.176 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.176 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.743 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.743 "name": "raid_bdev1", 00:14:19.743 "uuid": "f12923ca-59a7-48a0-a399-0fe55112e3ca", 00:14:19.743 "strip_size_kb": 64, 00:14:19.743 "state": "configuring", 00:14:19.743 "raid_level": "concat", 00:14:19.743 "superblock": true, 00:14:19.743 "num_base_bdevs": 3, 00:14:19.743 "num_base_bdevs_discovered": 1, 00:14:19.743 "num_base_bdevs_operational": 3, 00:14:19.743 "base_bdevs_list": [ 00:14:19.743 { 00:14:19.743 "name": "pt1", 00:14:19.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:19.743 "is_configured": true, 00:14:19.743 "data_offset": 2048, 00:14:19.743 "data_size": 63488 00:14:19.743 }, 00:14:19.743 { 00:14:19.743 "name": null, 00:14:19.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:19.743 "is_configured": false, 00:14:19.743 "data_offset": 2048, 00:14:19.743 "data_size": 63488 00:14:19.743 }, 00:14:19.743 { 00:14:19.743 "name": null, 00:14:19.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:19.743 "is_configured": false, 00:14:19.743 "data_offset": 2048, 00:14:19.743 "data_size": 63488 00:14:19.743 } 00:14:19.743 ] 00:14:19.743 }' 00:14:19.743 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.743 10:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.000 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:14:20.000 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:20.258 [2024-07-25 10:29:23.908336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:20.258 [2024-07-25 10:29:23.908407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.258 [2024-07-25 10:29:23.908432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x270c7e0 00:14:20.258 [2024-07-25 10:29:23.908446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.259 [2024-07-25 10:29:23.908804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.259 [2024-07-25 10:29:23.908824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:20.259 [2024-07-25 10:29:23.908897] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:20.259 [2024-07-25 10:29:23.908921] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:20.259 pt2 00:14:20.259 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:20.517 [2024-07-25 10:29:24.197097] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:20.517 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:14:20.517 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:20.517 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:20.517 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:20.517 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:20.517 10:29:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:20.517 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:20.517 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:20.517 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:20.517 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:20.517 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.517 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.775 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:20.775 "name": "raid_bdev1", 00:14:20.775 "uuid": "f12923ca-59a7-48a0-a399-0fe55112e3ca", 00:14:20.775 "strip_size_kb": 64, 00:14:20.775 "state": "configuring", 00:14:20.775 "raid_level": "concat", 00:14:20.775 "superblock": true, 00:14:20.775 "num_base_bdevs": 3, 00:14:20.775 "num_base_bdevs_discovered": 1, 00:14:20.775 "num_base_bdevs_operational": 3, 00:14:20.775 "base_bdevs_list": [ 00:14:20.775 { 00:14:20.775 "name": "pt1", 00:14:20.775 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:20.775 "is_configured": true, 00:14:20.775 "data_offset": 2048, 00:14:20.775 "data_size": 63488 00:14:20.775 }, 00:14:20.775 { 00:14:20.775 "name": null, 00:14:20.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:20.775 "is_configured": false, 00:14:20.775 "data_offset": 2048, 00:14:20.775 "data_size": 63488 00:14:20.775 }, 00:14:20.775 { 00:14:20.775 "name": null, 00:14:20.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:20.776 "is_configured": false, 00:14:20.776 "data_offset": 2048, 00:14:20.776 "data_size": 63488 00:14:20.776 } 00:14:20.776 ] 00:14:20.776 }' 
00:14:20.776 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:20.776 10:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.342 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:21.342 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:21.342 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:21.602 [2024-07-25 10:29:25.251902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:21.602 [2024-07-25 10:29:25.251966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.602 [2024-07-25 10:29:25.251988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27035b0 00:14:21.602 [2024-07-25 10:29:25.252001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.602 [2024-07-25 10:29:25.252388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.602 [2024-07-25 10:29:25.252423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:21.602 [2024-07-25 10:29:25.252511] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:21.602 [2024-07-25 10:29:25.252534] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:21.602 pt2 00:14:21.602 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:21.602 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:21.602 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:21.907 [2024-07-25 10:29:25.492536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:21.907 [2024-07-25 10:29:25.492576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.907 [2024-07-25 10:29:25.492603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25629b0 00:14:21.907 [2024-07-25 10:29:25.492619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.907 [2024-07-25 10:29:25.492909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.907 [2024-07-25 10:29:25.492936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:21.907 [2024-07-25 10:29:25.492993] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:21.907 [2024-07-25 10:29:25.493018] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:21.907 [2024-07-25 10:29:25.493147] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2560f00 00:14:21.907 [2024-07-25 10:29:25.493164] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:21.907 [2024-07-25 10:29:25.493332] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25627b0 00:14:21.907 [2024-07-25 10:29:25.493480] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2560f00 00:14:21.907 [2024-07-25 10:29:25.493496] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2560f00 00:14:21.907 [2024-07-25 10:29:25.493604] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.907 pt3 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.907 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.165 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.165 "name": "raid_bdev1", 00:14:22.166 "uuid": "f12923ca-59a7-48a0-a399-0fe55112e3ca", 00:14:22.166 "strip_size_kb": 64, 00:14:22.166 "state": "online", 00:14:22.166 "raid_level": "concat", 00:14:22.166 "superblock": true, 00:14:22.166 "num_base_bdevs": 3, 00:14:22.166 "num_base_bdevs_discovered": 3, 00:14:22.166 "num_base_bdevs_operational": 3, 00:14:22.166 "base_bdevs_list": [ 00:14:22.166 { 
00:14:22.166 "name": "pt1", 00:14:22.166 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:22.166 "is_configured": true, 00:14:22.166 "data_offset": 2048, 00:14:22.166 "data_size": 63488 00:14:22.166 }, 00:14:22.166 { 00:14:22.166 "name": "pt2", 00:14:22.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.166 "is_configured": true, 00:14:22.166 "data_offset": 2048, 00:14:22.166 "data_size": 63488 00:14:22.166 }, 00:14:22.166 { 00:14:22.166 "name": "pt3", 00:14:22.166 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:22.166 "is_configured": true, 00:14:22.166 "data_offset": 2048, 00:14:22.166 "data_size": 63488 00:14:22.166 } 00:14:22.166 ] 00:14:22.166 }' 00:14:22.166 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.166 10:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.731 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:22.731 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:22.732 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:22.732 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:22.732 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:22.732 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:22.732 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:22.732 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:22.990 [2024-07-25 10:29:26.539579] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.990 10:29:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:22.990 "name": "raid_bdev1", 00:14:22.990 "aliases": [ 00:14:22.990 "f12923ca-59a7-48a0-a399-0fe55112e3ca" 00:14:22.990 ], 00:14:22.990 "product_name": "Raid Volume", 00:14:22.990 "block_size": 512, 00:14:22.990 "num_blocks": 190464, 00:14:22.990 "uuid": "f12923ca-59a7-48a0-a399-0fe55112e3ca", 00:14:22.990 "assigned_rate_limits": { 00:14:22.990 "rw_ios_per_sec": 0, 00:14:22.990 "rw_mbytes_per_sec": 0, 00:14:22.990 "r_mbytes_per_sec": 0, 00:14:22.990 "w_mbytes_per_sec": 0 00:14:22.990 }, 00:14:22.990 "claimed": false, 00:14:22.990 "zoned": false, 00:14:22.990 "supported_io_types": { 00:14:22.990 "read": true, 00:14:22.990 "write": true, 00:14:22.990 "unmap": true, 00:14:22.990 "flush": true, 00:14:22.990 "reset": true, 00:14:22.990 "nvme_admin": false, 00:14:22.990 "nvme_io": false, 00:14:22.990 "nvme_io_md": false, 00:14:22.990 "write_zeroes": true, 00:14:22.990 "zcopy": false, 00:14:22.990 "get_zone_info": false, 00:14:22.990 "zone_management": false, 00:14:22.990 "zone_append": false, 00:14:22.990 "compare": false, 00:14:22.990 "compare_and_write": false, 00:14:22.990 "abort": false, 00:14:22.990 "seek_hole": false, 00:14:22.990 "seek_data": false, 00:14:22.990 "copy": false, 00:14:22.990 "nvme_iov_md": false 00:14:22.990 }, 00:14:22.990 "memory_domains": [ 00:14:22.990 { 00:14:22.990 "dma_device_id": "system", 00:14:22.990 "dma_device_type": 1 00:14:22.990 }, 00:14:22.990 { 00:14:22.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.990 "dma_device_type": 2 00:14:22.990 }, 00:14:22.990 { 00:14:22.990 "dma_device_id": "system", 00:14:22.990 "dma_device_type": 1 00:14:22.990 }, 00:14:22.990 { 00:14:22.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.990 "dma_device_type": 2 00:14:22.990 }, 00:14:22.990 { 00:14:22.990 "dma_device_id": "system", 00:14:22.990 "dma_device_type": 1 00:14:22.990 }, 00:14:22.990 { 00:14:22.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.990 "dma_device_type": 2 
00:14:22.990 } 00:14:22.990 ], 00:14:22.990 "driver_specific": { 00:14:22.990 "raid": { 00:14:22.990 "uuid": "f12923ca-59a7-48a0-a399-0fe55112e3ca", 00:14:22.990 "strip_size_kb": 64, 00:14:22.990 "state": "online", 00:14:22.990 "raid_level": "concat", 00:14:22.990 "superblock": true, 00:14:22.990 "num_base_bdevs": 3, 00:14:22.990 "num_base_bdevs_discovered": 3, 00:14:22.990 "num_base_bdevs_operational": 3, 00:14:22.990 "base_bdevs_list": [ 00:14:22.990 { 00:14:22.990 "name": "pt1", 00:14:22.990 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:22.990 "is_configured": true, 00:14:22.990 "data_offset": 2048, 00:14:22.990 "data_size": 63488 00:14:22.990 }, 00:14:22.990 { 00:14:22.990 "name": "pt2", 00:14:22.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.990 "is_configured": true, 00:14:22.990 "data_offset": 2048, 00:14:22.990 "data_size": 63488 00:14:22.990 }, 00:14:22.990 { 00:14:22.990 "name": "pt3", 00:14:22.990 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:22.990 "is_configured": true, 00:14:22.990 "data_offset": 2048, 00:14:22.990 "data_size": 63488 00:14:22.990 } 00:14:22.990 ] 00:14:22.990 } 00:14:22.990 } 00:14:22.990 }' 00:14:22.990 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:22.990 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:22.990 pt2 00:14:22.990 pt3' 00:14:22.990 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:22.990 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:22.990 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.248 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.248 "name": 
"pt1", 00:14:23.248 "aliases": [ 00:14:23.248 "00000000-0000-0000-0000-000000000001" 00:14:23.248 ], 00:14:23.248 "product_name": "passthru", 00:14:23.248 "block_size": 512, 00:14:23.248 "num_blocks": 65536, 00:14:23.248 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.248 "assigned_rate_limits": { 00:14:23.248 "rw_ios_per_sec": 0, 00:14:23.248 "rw_mbytes_per_sec": 0, 00:14:23.248 "r_mbytes_per_sec": 0, 00:14:23.248 "w_mbytes_per_sec": 0 00:14:23.248 }, 00:14:23.248 "claimed": true, 00:14:23.248 "claim_type": "exclusive_write", 00:14:23.248 "zoned": false, 00:14:23.248 "supported_io_types": { 00:14:23.248 "read": true, 00:14:23.248 "write": true, 00:14:23.248 "unmap": true, 00:14:23.248 "flush": true, 00:14:23.248 "reset": true, 00:14:23.248 "nvme_admin": false, 00:14:23.248 "nvme_io": false, 00:14:23.248 "nvme_io_md": false, 00:14:23.248 "write_zeroes": true, 00:14:23.248 "zcopy": true, 00:14:23.248 "get_zone_info": false, 00:14:23.248 "zone_management": false, 00:14:23.248 "zone_append": false, 00:14:23.248 "compare": false, 00:14:23.248 "compare_and_write": false, 00:14:23.248 "abort": true, 00:14:23.248 "seek_hole": false, 00:14:23.248 "seek_data": false, 00:14:23.248 "copy": true, 00:14:23.248 "nvme_iov_md": false 00:14:23.248 }, 00:14:23.248 "memory_domains": [ 00:14:23.248 { 00:14:23.248 "dma_device_id": "system", 00:14:23.248 "dma_device_type": 1 00:14:23.248 }, 00:14:23.248 { 00:14:23.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.248 "dma_device_type": 2 00:14:23.248 } 00:14:23.248 ], 00:14:23.248 "driver_specific": { 00:14:23.248 "passthru": { 00:14:23.248 "name": "pt1", 00:14:23.248 "base_bdev_name": "malloc1" 00:14:23.248 } 00:14:23.248 } 00:14:23.248 }' 00:14:23.248 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.248 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.248 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:14:23.248 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.506 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.506 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:23.506 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.506 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.506 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.506 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.506 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.506 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.506 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.506 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:23.506 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.764 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.764 "name": "pt2", 00:14:23.764 "aliases": [ 00:14:23.764 "00000000-0000-0000-0000-000000000002" 00:14:23.764 ], 00:14:23.764 "product_name": "passthru", 00:14:23.764 "block_size": 512, 00:14:23.764 "num_blocks": 65536, 00:14:23.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:23.764 "assigned_rate_limits": { 00:14:23.764 "rw_ios_per_sec": 0, 00:14:23.764 "rw_mbytes_per_sec": 0, 00:14:23.764 "r_mbytes_per_sec": 0, 00:14:23.764 "w_mbytes_per_sec": 0 00:14:23.764 }, 00:14:23.764 "claimed": true, 00:14:23.764 "claim_type": "exclusive_write", 00:14:23.764 "zoned": false, 
00:14:23.764 "supported_io_types": { 00:14:23.764 "read": true, 00:14:23.764 "write": true, 00:14:23.764 "unmap": true, 00:14:23.764 "flush": true, 00:14:23.764 "reset": true, 00:14:23.764 "nvme_admin": false, 00:14:23.764 "nvme_io": false, 00:14:23.764 "nvme_io_md": false, 00:14:23.764 "write_zeroes": true, 00:14:23.764 "zcopy": true, 00:14:23.764 "get_zone_info": false, 00:14:23.764 "zone_management": false, 00:14:23.764 "zone_append": false, 00:14:23.764 "compare": false, 00:14:23.764 "compare_and_write": false, 00:14:23.764 "abort": true, 00:14:23.764 "seek_hole": false, 00:14:23.764 "seek_data": false, 00:14:23.764 "copy": true, 00:14:23.764 "nvme_iov_md": false 00:14:23.764 }, 00:14:23.764 "memory_domains": [ 00:14:23.764 { 00:14:23.764 "dma_device_id": "system", 00:14:23.764 "dma_device_type": 1 00:14:23.764 }, 00:14:23.764 { 00:14:23.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.764 "dma_device_type": 2 00:14:23.764 } 00:14:23.764 ], 00:14:23.764 "driver_specific": { 00:14:23.764 "passthru": { 00:14:23.764 "name": "pt2", 00:14:23.764 "base_bdev_name": "malloc2" 00:14:23.764 } 00:14:23.764 } 00:14:23.764 }' 00:14:23.764 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.764 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.022 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.022 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.022 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.022 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.022 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.022 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.022 10:29:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.022 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.022 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.280 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.280 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:24.280 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:24.280 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.537 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.537 "name": "pt3", 00:14:24.537 "aliases": [ 00:14:24.537 "00000000-0000-0000-0000-000000000003" 00:14:24.537 ], 00:14:24.537 "product_name": "passthru", 00:14:24.537 "block_size": 512, 00:14:24.537 "num_blocks": 65536, 00:14:24.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.537 "assigned_rate_limits": { 00:14:24.537 "rw_ios_per_sec": 0, 00:14:24.537 "rw_mbytes_per_sec": 0, 00:14:24.537 "r_mbytes_per_sec": 0, 00:14:24.537 "w_mbytes_per_sec": 0 00:14:24.537 }, 00:14:24.537 "claimed": true, 00:14:24.537 "claim_type": "exclusive_write", 00:14:24.537 "zoned": false, 00:14:24.537 "supported_io_types": { 00:14:24.537 "read": true, 00:14:24.537 "write": true, 00:14:24.537 "unmap": true, 00:14:24.537 "flush": true, 00:14:24.537 "reset": true, 00:14:24.537 "nvme_admin": false, 00:14:24.537 "nvme_io": false, 00:14:24.537 "nvme_io_md": false, 00:14:24.537 "write_zeroes": true, 00:14:24.537 "zcopy": true, 00:14:24.537 "get_zone_info": false, 00:14:24.537 "zone_management": false, 00:14:24.537 "zone_append": false, 00:14:24.537 "compare": false, 00:14:24.537 "compare_and_write": false, 00:14:24.537 "abort": true, 00:14:24.537 
"seek_hole": false, 00:14:24.537 "seek_data": false, 00:14:24.537 "copy": true, 00:14:24.537 "nvme_iov_md": false 00:14:24.537 }, 00:14:24.537 "memory_domains": [ 00:14:24.537 { 00:14:24.537 "dma_device_id": "system", 00:14:24.537 "dma_device_type": 1 00:14:24.537 }, 00:14:24.537 { 00:14:24.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.537 "dma_device_type": 2 00:14:24.537 } 00:14:24.537 ], 00:14:24.537 "driver_specific": { 00:14:24.537 "passthru": { 00:14:24.537 "name": "pt3", 00:14:24.537 "base_bdev_name": "malloc3" 00:14:24.537 } 00:14:24.537 } 00:14:24.537 }' 00:14:24.537 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.537 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.537 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.537 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.537 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.537 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.537 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.537 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.537 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.537 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.794 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.794 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.794 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:24.794 10:29:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:25.051 [2024-07-25 10:29:28.520858] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f12923ca-59a7-48a0-a399-0fe55112e3ca '!=' f12923ca-59a7-48a0-a399-0fe55112e3ca ']' 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2371352 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2371352 ']' 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2371352 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2371352 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2371352' 00:14:25.051 killing process with pid 2371352 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2371352 00:14:25.051 [2024-07-25 10:29:28.570397] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.051 10:29:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@974 -- # wait 2371352 00:14:25.051 [2024-07-25 10:29:28.570486] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.051 [2024-07-25 10:29:28.570557] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.051 [2024-07-25 10:29:28.570573] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2560f00 name raid_bdev1, state offline 00:14:25.052 [2024-07-25 10:29:28.608268] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.311 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:25.311 00:14:25.311 real 0m14.287s 00:14:25.311 user 0m26.143s 00:14:25.311 sys 0m1.960s 00:14:25.311 10:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:25.311 10:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.311 ************************************ 00:14:25.311 END TEST raid_superblock_test 00:14:25.311 ************************************ 00:14:25.311 10:29:28 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:14:25.311 10:29:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:25.311 10:29:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:25.311 10:29:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.311 ************************************ 00:14:25.311 START TEST raid_read_error_test 00:14:25.311 ************************************ 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:14:25.311 10:29:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ymMvbQ9No1 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2373247 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2373247 /var/tmp/spdk-raid.sock 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2373247 ']' 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:25.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.311 10:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.311 [2024-07-25 10:29:29.014255] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:14:25.311 [2024-07-25 10:29:29.014342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373247 ] 00:14:25.570 [2024-07-25 10:29:29.097050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.570 [2024-07-25 10:29:29.222500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.828 [2024-07-25 10:29:29.300364] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.828 [2024-07-25 10:29:29.300409] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.392 10:29:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.392 10:29:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:26.392 10:29:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:26.392 10:29:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:26.650 BaseBdev1_malloc 00:14:26.650 10:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:26.907 true 00:14:26.907 10:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:27.165 [2024-07-25 10:29:30.863685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:27.165 [2024-07-25 10:29:30.863743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:27.165 [2024-07-25 10:29:30.863774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b72250 00:14:27.165 [2024-07-25 10:29:30.863790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.165 [2024-07-25 10:29:30.865440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.165 [2024-07-25 10:29:30.865468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:27.165 BaseBdev1 00:14:27.422 10:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:27.422 10:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:27.422 BaseBdev2_malloc 00:14:27.680 10:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:27.937 true 00:14:27.937 10:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:28.194 [2024-07-25 10:29:31.658062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:28.194 [2024-07-25 10:29:31.658130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.194 [2024-07-25 10:29:31.658157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b61650 00:14:28.194 [2024-07-25 10:29:31.658172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.194 [2024-07-25 10:29:31.659773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.194 [2024-07-25 10:29:31.659801] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:28.194 BaseBdev2 00:14:28.194 10:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:28.194 10:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:28.451 BaseBdev3_malloc 00:14:28.451 10:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:28.708 true 00:14:28.708 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:28.965 [2024-07-25 10:29:32.499598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:28.965 [2024-07-25 10:29:32.499651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.965 [2024-07-25 10:29:32.499675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b575d0 00:14:28.965 [2024-07-25 10:29:32.499690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.965 [2024-07-25 10:29:32.501259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.965 [2024-07-25 10:29:32.501288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:28.965 BaseBdev3 00:14:28.965 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:29.222 [2024-07-25 10:29:32.752342] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.222 [2024-07-25 10:29:32.753896] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.222 [2024-07-25 10:29:32.753978] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.222 [2024-07-25 10:29:32.754262] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x19b66b0 00:14:29.222 [2024-07-25 10:29:32.754280] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:29.222 [2024-07-25 10:29:32.754530] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19b6b00 00:14:29.222 [2024-07-25 10:29:32.754737] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19b66b0 00:14:29.222 [2024-07-25 10:29:32.754754] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x19b66b0 00:14:29.222 [2024-07-25 10:29:32.754914] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.222 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:29.222 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:29.222 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:29.222 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:29.222 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:29.222 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:29.222 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:29.222 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:14:29.222 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:29.222 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:29.222 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.222 10:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.488 10:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:29.488 "name": "raid_bdev1", 00:14:29.488 "uuid": "f9717c8b-50aa-4312-9a8e-2577580494db", 00:14:29.488 "strip_size_kb": 64, 00:14:29.488 "state": "online", 00:14:29.488 "raid_level": "concat", 00:14:29.488 "superblock": true, 00:14:29.488 "num_base_bdevs": 3, 00:14:29.488 "num_base_bdevs_discovered": 3, 00:14:29.488 "num_base_bdevs_operational": 3, 00:14:29.488 "base_bdevs_list": [ 00:14:29.488 { 00:14:29.488 "name": "BaseBdev1", 00:14:29.488 "uuid": "d94a65a0-6861-5a40-8bb0-03a6829a0f55", 00:14:29.488 "is_configured": true, 00:14:29.488 "data_offset": 2048, 00:14:29.488 "data_size": 63488 00:14:29.488 }, 00:14:29.488 { 00:14:29.488 "name": "BaseBdev2", 00:14:29.488 "uuid": "3cd44cfc-a723-5720-8e10-fcbd3a1c87e2", 00:14:29.488 "is_configured": true, 00:14:29.488 "data_offset": 2048, 00:14:29.488 "data_size": 63488 00:14:29.488 }, 00:14:29.488 { 00:14:29.488 "name": "BaseBdev3", 00:14:29.488 "uuid": "8bddaff9-2bc9-5ae0-abe8-9fa3386233cc", 00:14:29.488 "is_configured": true, 00:14:29.488 "data_offset": 2048, 00:14:29.488 "data_size": 63488 00:14:29.488 } 00:14:29.488 ] 00:14:29.488 }' 00:14:29.488 10:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:29.488 10:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.053 10:29:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@824 -- # sleep 1 00:14:30.053 10:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:30.053 [2024-07-25 10:29:33.675183] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19bfd70 00:14:30.986 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.243 10:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.501 10:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:31.501 "name": "raid_bdev1", 00:14:31.501 "uuid": "f9717c8b-50aa-4312-9a8e-2577580494db", 00:14:31.501 "strip_size_kb": 64, 00:14:31.501 "state": "online", 00:14:31.501 "raid_level": "concat", 00:14:31.501 "superblock": true, 00:14:31.501 "num_base_bdevs": 3, 00:14:31.501 "num_base_bdevs_discovered": 3, 00:14:31.501 "num_base_bdevs_operational": 3, 00:14:31.501 "base_bdevs_list": [ 00:14:31.501 { 00:14:31.501 "name": "BaseBdev1", 00:14:31.501 "uuid": "d94a65a0-6861-5a40-8bb0-03a6829a0f55", 00:14:31.501 "is_configured": true, 00:14:31.501 "data_offset": 2048, 00:14:31.501 "data_size": 63488 00:14:31.501 }, 00:14:31.501 { 00:14:31.501 "name": "BaseBdev2", 00:14:31.501 "uuid": "3cd44cfc-a723-5720-8e10-fcbd3a1c87e2", 00:14:31.501 "is_configured": true, 00:14:31.501 "data_offset": 2048, 00:14:31.501 "data_size": 63488 00:14:31.501 }, 00:14:31.501 { 00:14:31.501 "name": "BaseBdev3", 00:14:31.501 "uuid": "8bddaff9-2bc9-5ae0-abe8-9fa3386233cc", 00:14:31.501 "is_configured": true, 00:14:31.501 "data_offset": 2048, 00:14:31.501 "data_size": 63488 00:14:31.501 } 00:14:31.501 ] 00:14:31.501 }' 00:14:31.501 10:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:31.501 10:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.066 10:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:32.325 [2024-07-25 
10:29:35.947732] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.325 [2024-07-25 10:29:35.947776] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.325 [2024-07-25 10:29:35.950792] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.325 [2024-07-25 10:29:35.950833] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.325 [2024-07-25 10:29:35.950883] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.325 [2024-07-25 10:29:35.950897] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19b66b0 name raid_bdev1, state offline 00:14:32.325 0 00:14:32.325 10:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2373247 00:14:32.325 10:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2373247 ']' 00:14:32.325 10:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2373247 00:14:32.325 10:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:14:32.325 10:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.325 10:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2373247 00:14:32.325 10:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:32.325 10:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:32.325 10:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2373247' 00:14:32.325 killing process with pid 2373247 00:14:32.325 10:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2373247 00:14:32.325 [2024-07-25 10:29:35.994794] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.325 10:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2373247 00:14:32.325 [2024-07-25 10:29:36.024160] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.891 10:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ymMvbQ9No1 00:14:32.891 10:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:32.891 10:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:32.891 10:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:14:32.891 10:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:14:32.891 10:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:32.891 10:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:32.891 10:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:14:32.891 00:14:32.891 real 0m7.376s 00:14:32.891 user 0m11.933s 00:14:32.891 sys 0m1.020s 00:14:32.891 10:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.891 10:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.891 ************************************ 00:14:32.891 END TEST raid_read_error_test 00:14:32.891 ************************************ 00:14:32.891 10:29:36 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:14:32.891 10:29:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:32.891 10:29:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.891 10:29:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:32.891 ************************************ 00:14:32.891 START TEST raid_write_error_test 
00:14:32.891 ************************************ 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:32.891 10:29:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.PNnd6bRdMl 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2374276 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2374276 /var/tmp/spdk-raid.sock 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2374276 ']' 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:32.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:32.891 10:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.891 [2024-07-25 10:29:36.437092] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:14:32.891 [2024-07-25 10:29:36.437188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374276 ] 00:14:32.891 [2024-07-25 10:29:36.512404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.149 [2024-07-25 10:29:36.625912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.149 [2024-07-25 10:29:36.698441] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.149 [2024-07-25 10:29:36.698486] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.714 10:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:33.714 10:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:33.714 10:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:33.714 10:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:33.972 BaseBdev1_malloc 00:14:34.230 10:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:34.488 true 00:14:34.488 10:29:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:34.746 [2024-07-25 10:29:38.214636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:34.746 [2024-07-25 10:29:38.214698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.746 [2024-07-25 10:29:38.214726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c01250 00:14:34.746 [2024-07-25 10:29:38.214742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.746 [2024-07-25 10:29:38.216688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.746 [2024-07-25 10:29:38.216718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:34.746 BaseBdev1 00:14:34.746 10:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:34.746 10:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:35.004 BaseBdev2_malloc 00:14:35.004 10:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:35.261 true 00:14:35.261 10:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:35.518 [2024-07-25 10:29:39.004467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:35.518 [2024-07-25 10:29:39.004533] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.518 [2024-07-25 10:29:39.004559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bf0650 00:14:35.518 [2024-07-25 10:29:39.004572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.518 [2024-07-25 10:29:39.006175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.518 [2024-07-25 10:29:39.006198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:35.518 BaseBdev2 00:14:35.518 10:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:35.518 10:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:35.775 BaseBdev3_malloc 00:14:35.775 10:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:36.032 true 00:14:36.032 10:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:36.288 [2024-07-25 10:29:39.825852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:36.288 [2024-07-25 10:29:39.825923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.288 [2024-07-25 10:29:39.825949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be65d0 00:14:36.288 [2024-07-25 10:29:39.825962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.288 [2024-07-25 10:29:39.827808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:36.288 [2024-07-25 10:29:39.827836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:36.288 BaseBdev3 00:14:36.288 10:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:36.544 [2024-07-25 10:29:40.134731] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.544 [2024-07-25 10:29:40.136220] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.544 [2024-07-25 10:29:40.136300] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.544 [2024-07-25 10:29:40.136551] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a456b0 00:14:36.544 [2024-07-25 10:29:40.136569] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:36.544 [2024-07-25 10:29:40.136799] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a45b00 00:14:36.544 [2024-07-25 10:29:40.136994] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a456b0 00:14:36.544 [2024-07-25 10:29:40.137011] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a456b0 00:14:36.544 [2024-07-25 10:29:40.137171] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.544 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:36.544 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:36.544 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:36.544 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 
-- # local raid_level=concat 00:14:36.544 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:36.544 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:36.544 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:36.544 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:36.544 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:36.544 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:36.544 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.544 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.801 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.801 "name": "raid_bdev1", 00:14:36.801 "uuid": "48ffe710-9355-421a-8ff2-08f9491b701d", 00:14:36.801 "strip_size_kb": 64, 00:14:36.801 "state": "online", 00:14:36.801 "raid_level": "concat", 00:14:36.801 "superblock": true, 00:14:36.801 "num_base_bdevs": 3, 00:14:36.801 "num_base_bdevs_discovered": 3, 00:14:36.801 "num_base_bdevs_operational": 3, 00:14:36.801 "base_bdevs_list": [ 00:14:36.801 { 00:14:36.801 "name": "BaseBdev1", 00:14:36.801 "uuid": "8f602740-1e9e-54b7-91ea-aa48ef61ec54", 00:14:36.801 "is_configured": true, 00:14:36.801 "data_offset": 2048, 00:14:36.801 "data_size": 63488 00:14:36.801 }, 00:14:36.801 { 00:14:36.801 "name": "BaseBdev2", 00:14:36.801 "uuid": "193a0cf6-5717-5a9b-832f-60fffa844412", 00:14:36.801 "is_configured": true, 00:14:36.801 "data_offset": 2048, 00:14:36.801 "data_size": 63488 00:14:36.801 }, 00:14:36.801 { 00:14:36.801 "name": "BaseBdev3", 
00:14:36.801 "uuid": "9a075c88-b45e-5d90-b0a6-adf9399c6d80", 00:14:36.801 "is_configured": true, 00:14:36.801 "data_offset": 2048, 00:14:36.801 "data_size": 63488 00:14:36.801 } 00:14:36.801 ] 00:14:36.801 }' 00:14:36.801 10:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.801 10:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.365 10:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:37.365 10:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:37.622 [2024-07-25 10:29:41.097614] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a4ed70 00:14:38.577 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:38.835 10:29:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.835 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.093 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:39.093 "name": "raid_bdev1", 00:14:39.093 "uuid": "48ffe710-9355-421a-8ff2-08f9491b701d", 00:14:39.093 "strip_size_kb": 64, 00:14:39.093 "state": "online", 00:14:39.093 "raid_level": "concat", 00:14:39.093 "superblock": true, 00:14:39.093 "num_base_bdevs": 3, 00:14:39.093 "num_base_bdevs_discovered": 3, 00:14:39.093 "num_base_bdevs_operational": 3, 00:14:39.093 "base_bdevs_list": [ 00:14:39.093 { 00:14:39.093 "name": "BaseBdev1", 00:14:39.093 "uuid": "8f602740-1e9e-54b7-91ea-aa48ef61ec54", 00:14:39.093 "is_configured": true, 00:14:39.093 "data_offset": 2048, 00:14:39.093 "data_size": 63488 00:14:39.093 }, 00:14:39.093 { 00:14:39.093 "name": "BaseBdev2", 00:14:39.093 "uuid": "193a0cf6-5717-5a9b-832f-60fffa844412", 00:14:39.093 "is_configured": true, 00:14:39.093 "data_offset": 2048, 00:14:39.093 "data_size": 63488 00:14:39.093 }, 00:14:39.093 { 00:14:39.093 "name": "BaseBdev3", 00:14:39.093 "uuid": "9a075c88-b45e-5d90-b0a6-adf9399c6d80", 00:14:39.093 "is_configured": true, 00:14:39.093 "data_offset": 2048, 00:14:39.093 "data_size": 
63488 00:14:39.093 } 00:14:39.093 ] 00:14:39.093 }' 00:14:39.093 10:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:39.093 10:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.659 10:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:39.915 [2024-07-25 10:29:43.382398] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.915 [2024-07-25 10:29:43.382445] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.915 [2024-07-25 10:29:43.384932] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.915 [2024-07-25 10:29:43.384965] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.915 [2024-07-25 10:29:43.384997] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.915 [2024-07-25 10:29:43.385008] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a456b0 name raid_bdev1, state offline 00:14:39.915 0 00:14:39.915 10:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2374276 00:14:39.915 10:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2374276 ']' 00:14:39.915 10:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2374276 00:14:39.915 10:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:14:39.915 10:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.915 10:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2374276 00:14:39.915 10:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:14:39.915 10:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.915 10:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2374276' 00:14:39.915 killing process with pid 2374276 00:14:39.915 10:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2374276 00:14:39.915 [2024-07-25 10:29:43.433663] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.915 10:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2374276 00:14:39.915 [2024-07-25 10:29:43.462121] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.173 10:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.PNnd6bRdMl 00:14:40.173 10:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:40.173 10:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:40.173 10:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:14:40.173 10:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:14:40.173 10:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:40.173 10:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:40.173 10:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:14:40.173 00:14:40.173 real 0m7.377s 00:14:40.173 user 0m11.962s 00:14:40.173 sys 0m1.000s 00:14:40.173 10:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.173 10:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.173 ************************************ 00:14:40.173 END TEST raid_write_error_test 00:14:40.173 
************************************ 00:14:40.173 10:29:43 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:40.173 10:29:43 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:14:40.173 10:29:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:40.173 10:29:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:40.173 10:29:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:40.173 ************************************ 00:14:40.173 START TEST raid_state_function_test 00:14:40.173 ************************************ 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:40.173 10:29:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2375176 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2375176' 00:14:40.173 Process raid pid: 2375176 
00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2375176 /var/tmp/spdk-raid.sock 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2375176 ']' 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:40.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:40.173 10:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.173 [2024-07-25 10:29:43.860572] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:14:40.173 [2024-07-25 10:29:43.860656] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.431 [2024-07-25 10:29:43.940429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.431 [2024-07-25 10:29:44.053108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.431 [2024-07-25 10:29:44.123970] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.431 [2024-07-25 10:29:44.124012] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.364 10:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.364 10:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:41.364 10:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:41.364 [2024-07-25 10:29:45.046826] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.364 [2024-07-25 10:29:45.046867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.364 [2024-07-25 10:29:45.046893] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.364 [2024-07-25 10:29:45.046904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.364 [2024-07-25 10:29:45.046912] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:41.364 [2024-07-25 10:29:45.046921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:41.364 10:29:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:41.364 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:41.364 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:41.364 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:41.364 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:41.364 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:41.364 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.364 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.364 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.364 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.364 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.364 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.621 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.621 "name": "Existed_Raid", 00:14:41.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.621 "strip_size_kb": 0, 00:14:41.621 "state": "configuring", 00:14:41.621 "raid_level": "raid1", 00:14:41.621 "superblock": false, 00:14:41.621 "num_base_bdevs": 3, 00:14:41.621 "num_base_bdevs_discovered": 0, 00:14:41.621 "num_base_bdevs_operational": 3, 00:14:41.621 "base_bdevs_list": [ 00:14:41.621 { 00:14:41.621 
"name": "BaseBdev1", 00:14:41.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.621 "is_configured": false, 00:14:41.621 "data_offset": 0, 00:14:41.621 "data_size": 0 00:14:41.621 }, 00:14:41.621 { 00:14:41.621 "name": "BaseBdev2", 00:14:41.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.621 "is_configured": false, 00:14:41.621 "data_offset": 0, 00:14:41.621 "data_size": 0 00:14:41.621 }, 00:14:41.621 { 00:14:41.621 "name": "BaseBdev3", 00:14:41.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.621 "is_configured": false, 00:14:41.621 "data_offset": 0, 00:14:41.621 "data_size": 0 00:14:41.621 } 00:14:41.621 ] 00:14:41.621 }' 00:14:41.621 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.621 10:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 10:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:42.443 [2024-07-25 10:29:46.149636] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.443 [2024-07-25 10:29:46.149671] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2246620 name Existed_Raid, state configuring 00:14:42.700 10:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:42.700 [2024-07-25 10:29:46.402317] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.700 [2024-07-25 10:29:46.402356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.700 [2024-07-25 10:29:46.402381] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:14:42.700 [2024-07-25 10:29:46.402400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.700 [2024-07-25 10:29:46.402407] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.700 [2024-07-25 10:29:46.402417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.957 10:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:43.215 [2024-07-25 10:29:46.667275] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.215 BaseBdev1 00:14:43.215 10:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:43.215 10:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:43.215 10:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:43.215 10:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:43.215 10:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:43.215 10:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:43.215 10:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:43.472 10:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:43.730 [ 00:14:43.730 { 00:14:43.730 "name": "BaseBdev1", 00:14:43.730 "aliases": [ 00:14:43.730 "ea1db7d5-2f5f-4350-928a-94c88292728e" 
00:14:43.730 ], 00:14:43.730 "product_name": "Malloc disk", 00:14:43.730 "block_size": 512, 00:14:43.730 "num_blocks": 65536, 00:14:43.730 "uuid": "ea1db7d5-2f5f-4350-928a-94c88292728e", 00:14:43.730 "assigned_rate_limits": { 00:14:43.730 "rw_ios_per_sec": 0, 00:14:43.730 "rw_mbytes_per_sec": 0, 00:14:43.730 "r_mbytes_per_sec": 0, 00:14:43.730 "w_mbytes_per_sec": 0 00:14:43.730 }, 00:14:43.730 "claimed": true, 00:14:43.730 "claim_type": "exclusive_write", 00:14:43.730 "zoned": false, 00:14:43.730 "supported_io_types": { 00:14:43.730 "read": true, 00:14:43.730 "write": true, 00:14:43.730 "unmap": true, 00:14:43.730 "flush": true, 00:14:43.730 "reset": true, 00:14:43.730 "nvme_admin": false, 00:14:43.730 "nvme_io": false, 00:14:43.730 "nvme_io_md": false, 00:14:43.730 "write_zeroes": true, 00:14:43.730 "zcopy": true, 00:14:43.730 "get_zone_info": false, 00:14:43.730 "zone_management": false, 00:14:43.730 "zone_append": false, 00:14:43.730 "compare": false, 00:14:43.730 "compare_and_write": false, 00:14:43.730 "abort": true, 00:14:43.730 "seek_hole": false, 00:14:43.730 "seek_data": false, 00:14:43.730 "copy": true, 00:14:43.730 "nvme_iov_md": false 00:14:43.730 }, 00:14:43.730 "memory_domains": [ 00:14:43.730 { 00:14:43.730 "dma_device_id": "system", 00:14:43.730 "dma_device_type": 1 00:14:43.730 }, 00:14:43.730 { 00:14:43.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.730 "dma_device_type": 2 00:14:43.730 } 00:14:43.730 ], 00:14:43.730 "driver_specific": {} 00:14:43.730 } 00:14:43.730 ] 00:14:43.730 10:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:43.730 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:43.730 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:43.730 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:14:43.730 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:43.730 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:43.730 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:43.730 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:43.730 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:43.730 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:43.730 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:43.730 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.730 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.988 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:43.988 "name": "Existed_Raid", 00:14:43.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.988 "strip_size_kb": 0, 00:14:43.988 "state": "configuring", 00:14:43.988 "raid_level": "raid1", 00:14:43.988 "superblock": false, 00:14:43.988 "num_base_bdevs": 3, 00:14:43.988 "num_base_bdevs_discovered": 1, 00:14:43.988 "num_base_bdevs_operational": 3, 00:14:43.988 "base_bdevs_list": [ 00:14:43.988 { 00:14:43.988 "name": "BaseBdev1", 00:14:43.988 "uuid": "ea1db7d5-2f5f-4350-928a-94c88292728e", 00:14:43.988 "is_configured": true, 00:14:43.988 "data_offset": 0, 00:14:43.988 "data_size": 65536 00:14:43.988 }, 00:14:43.988 { 00:14:43.988 "name": "BaseBdev2", 00:14:43.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.988 "is_configured": 
false, 00:14:43.988 "data_offset": 0, 00:14:43.988 "data_size": 0 00:14:43.988 }, 00:14:43.988 { 00:14:43.988 "name": "BaseBdev3", 00:14:43.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.988 "is_configured": false, 00:14:43.988 "data_offset": 0, 00:14:43.988 "data_size": 0 00:14:43.988 } 00:14:43.988 ] 00:14:43.988 }' 00:14:43.988 10:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:43.988 10:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.553 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:44.810 [2024-07-25 10:29:48.287577] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.810 [2024-07-25 10:29:48.287628] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2245e50 name Existed_Raid, state configuring 00:14:44.810 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:45.068 [2024-07-25 10:29:48.532274] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.068 [2024-07-25 10:29:48.533696] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.068 [2024-07-25 10:29:48.533732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.068 [2024-07-25 10:29:48.533758] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.068 [2024-07-25 10:29:48.533769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 
-- # (( i = 1 )) 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.068 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.325 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:45.325 "name": "Existed_Raid", 00:14:45.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.325 "strip_size_kb": 0, 00:14:45.325 "state": "configuring", 00:14:45.325 "raid_level": "raid1", 00:14:45.325 "superblock": false, 00:14:45.325 "num_base_bdevs": 3, 
00:14:45.325 "num_base_bdevs_discovered": 1, 00:14:45.325 "num_base_bdevs_operational": 3, 00:14:45.325 "base_bdevs_list": [ 00:14:45.325 { 00:14:45.325 "name": "BaseBdev1", 00:14:45.325 "uuid": "ea1db7d5-2f5f-4350-928a-94c88292728e", 00:14:45.325 "is_configured": true, 00:14:45.325 "data_offset": 0, 00:14:45.325 "data_size": 65536 00:14:45.325 }, 00:14:45.325 { 00:14:45.325 "name": "BaseBdev2", 00:14:45.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.325 "is_configured": false, 00:14:45.325 "data_offset": 0, 00:14:45.325 "data_size": 0 00:14:45.325 }, 00:14:45.325 { 00:14:45.325 "name": "BaseBdev3", 00:14:45.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.325 "is_configured": false, 00:14:45.325 "data_offset": 0, 00:14:45.325 "data_size": 0 00:14:45.325 } 00:14:45.325 ] 00:14:45.325 }' 00:14:45.325 10:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:45.325 10:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.891 10:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:45.891 [2024-07-25 10:29:49.589246] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.891 BaseBdev2 00:14:46.149 10:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:46.149 10:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:46.149 10:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:46.149 10:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:46.149 10:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:46.149 10:29:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:46.149 10:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.149 10:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:46.406 [ 00:14:46.406 { 00:14:46.406 "name": "BaseBdev2", 00:14:46.406 "aliases": [ 00:14:46.406 "71719b84-8826-4803-b3d2-ef4c81514042" 00:14:46.406 ], 00:14:46.406 "product_name": "Malloc disk", 00:14:46.406 "block_size": 512, 00:14:46.406 "num_blocks": 65536, 00:14:46.406 "uuid": "71719b84-8826-4803-b3d2-ef4c81514042", 00:14:46.406 "assigned_rate_limits": { 00:14:46.406 "rw_ios_per_sec": 0, 00:14:46.406 "rw_mbytes_per_sec": 0, 00:14:46.406 "r_mbytes_per_sec": 0, 00:14:46.406 "w_mbytes_per_sec": 0 00:14:46.406 }, 00:14:46.406 "claimed": true, 00:14:46.406 "claim_type": "exclusive_write", 00:14:46.406 "zoned": false, 00:14:46.406 "supported_io_types": { 00:14:46.406 "read": true, 00:14:46.406 "write": true, 00:14:46.406 "unmap": true, 00:14:46.406 "flush": true, 00:14:46.406 "reset": true, 00:14:46.406 "nvme_admin": false, 00:14:46.406 "nvme_io": false, 00:14:46.406 "nvme_io_md": false, 00:14:46.406 "write_zeroes": true, 00:14:46.406 "zcopy": true, 00:14:46.406 "get_zone_info": false, 00:14:46.406 "zone_management": false, 00:14:46.406 "zone_append": false, 00:14:46.406 "compare": false, 00:14:46.406 "compare_and_write": false, 00:14:46.406 "abort": true, 00:14:46.406 "seek_hole": false, 00:14:46.406 "seek_data": false, 00:14:46.406 "copy": true, 00:14:46.406 "nvme_iov_md": false 00:14:46.406 }, 00:14:46.406 "memory_domains": [ 00:14:46.406 { 00:14:46.406 "dma_device_id": "system", 00:14:46.406 "dma_device_type": 1 00:14:46.406 }, 00:14:46.406 { 
00:14:46.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.406 "dma_device_type": 2 00:14:46.406 } 00:14:46.406 ], 00:14:46.406 "driver_specific": {} 00:14:46.406 } 00:14:46.406 ] 00:14:46.406 10:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:46.406 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:46.406 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:46.406 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:46.406 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:46.663 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:46.663 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:46.663 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:46.663 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:46.663 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:46.663 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:46.663 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:46.663 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:46.663 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.663 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:46.663 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.663 "name": "Existed_Raid", 00:14:46.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.663 "strip_size_kb": 0, 00:14:46.663 "state": "configuring", 00:14:46.663 "raid_level": "raid1", 00:14:46.663 "superblock": false, 00:14:46.663 "num_base_bdevs": 3, 00:14:46.663 "num_base_bdevs_discovered": 2, 00:14:46.663 "num_base_bdevs_operational": 3, 00:14:46.663 "base_bdevs_list": [ 00:14:46.663 { 00:14:46.663 "name": "BaseBdev1", 00:14:46.663 "uuid": "ea1db7d5-2f5f-4350-928a-94c88292728e", 00:14:46.663 "is_configured": true, 00:14:46.663 "data_offset": 0, 00:14:46.663 "data_size": 65536 00:14:46.663 }, 00:14:46.663 { 00:14:46.663 "name": "BaseBdev2", 00:14:46.663 "uuid": "71719b84-8826-4803-b3d2-ef4c81514042", 00:14:46.663 "is_configured": true, 00:14:46.663 "data_offset": 0, 00:14:46.663 "data_size": 65536 00:14:46.663 }, 00:14:46.663 { 00:14:46.663 "name": "BaseBdev3", 00:14:46.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.663 "is_configured": false, 00:14:46.663 "data_offset": 0, 00:14:46.663 "data_size": 0 00:14:46.663 } 00:14:46.663 ] 00:14:46.663 }' 00:14:46.663 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.663 10:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.229 10:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:47.794 [2024-07-25 10:29:51.204363] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.794 [2024-07-25 10:29:51.204420] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2246d90 00:14:47.794 [2024-07-25 10:29:51.204431] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 
00:14:47.794 [2024-07-25 10:29:51.204693] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x224aa90 00:14:47.794 [2024-07-25 10:29:51.204850] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2246d90 00:14:47.794 [2024-07-25 10:29:51.204867] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2246d90 00:14:47.794 [2024-07-25 10:29:51.205078] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.794 BaseBdev3 00:14:47.794 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:47.794 10:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:47.794 10:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:47.794 10:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:47.794 10:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:47.794 10:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:47.794 10:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:48.050 10:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:48.307 [ 00:14:48.307 { 00:14:48.307 "name": "BaseBdev3", 00:14:48.307 "aliases": [ 00:14:48.307 "545b8a12-0be0-4f7e-b19e-78d1e276aff2" 00:14:48.307 ], 00:14:48.307 "product_name": "Malloc disk", 00:14:48.307 "block_size": 512, 00:14:48.307 "num_blocks": 65536, 00:14:48.307 "uuid": "545b8a12-0be0-4f7e-b19e-78d1e276aff2", 00:14:48.307 "assigned_rate_limits": { 
00:14:48.307 "rw_ios_per_sec": 0, 00:14:48.307 "rw_mbytes_per_sec": 0, 00:14:48.307 "r_mbytes_per_sec": 0, 00:14:48.307 "w_mbytes_per_sec": 0 00:14:48.307 }, 00:14:48.307 "claimed": true, 00:14:48.307 "claim_type": "exclusive_write", 00:14:48.307 "zoned": false, 00:14:48.307 "supported_io_types": { 00:14:48.307 "read": true, 00:14:48.307 "write": true, 00:14:48.307 "unmap": true, 00:14:48.307 "flush": true, 00:14:48.307 "reset": true, 00:14:48.307 "nvme_admin": false, 00:14:48.307 "nvme_io": false, 00:14:48.307 "nvme_io_md": false, 00:14:48.307 "write_zeroes": true, 00:14:48.307 "zcopy": true, 00:14:48.307 "get_zone_info": false, 00:14:48.307 "zone_management": false, 00:14:48.307 "zone_append": false, 00:14:48.307 "compare": false, 00:14:48.307 "compare_and_write": false, 00:14:48.307 "abort": true, 00:14:48.307 "seek_hole": false, 00:14:48.307 "seek_data": false, 00:14:48.307 "copy": true, 00:14:48.307 "nvme_iov_md": false 00:14:48.307 }, 00:14:48.307 "memory_domains": [ 00:14:48.307 { 00:14:48.307 "dma_device_id": "system", 00:14:48.307 "dma_device_type": 1 00:14:48.307 }, 00:14:48.307 { 00:14:48.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.307 "dma_device_type": 2 00:14:48.307 } 00:14:48.307 ], 00:14:48.307 "driver_specific": {} 00:14:48.307 } 00:14:48.307 ] 00:14:48.307 10:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:48.307 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:48.307 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:48.307 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:48.307 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:48.307 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:48.307 
10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:48.308 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:48.308 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:48.308 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:48.308 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:48.308 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:48.308 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:48.308 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.308 10:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.565 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:48.565 "name": "Existed_Raid", 00:14:48.565 "uuid": "0ca7b57e-d051-4fa6-a519-44b508f451e7", 00:14:48.565 "strip_size_kb": 0, 00:14:48.565 "state": "online", 00:14:48.565 "raid_level": "raid1", 00:14:48.565 "superblock": false, 00:14:48.565 "num_base_bdevs": 3, 00:14:48.565 "num_base_bdevs_discovered": 3, 00:14:48.565 "num_base_bdevs_operational": 3, 00:14:48.565 "base_bdevs_list": [ 00:14:48.565 { 00:14:48.565 "name": "BaseBdev1", 00:14:48.565 "uuid": "ea1db7d5-2f5f-4350-928a-94c88292728e", 00:14:48.565 "is_configured": true, 00:14:48.565 "data_offset": 0, 00:14:48.565 "data_size": 65536 00:14:48.565 }, 00:14:48.565 { 00:14:48.565 "name": "BaseBdev2", 00:14:48.565 "uuid": "71719b84-8826-4803-b3d2-ef4c81514042", 00:14:48.565 "is_configured": true, 00:14:48.565 "data_offset": 0, 
00:14:48.565 "data_size": 65536 00:14:48.565 }, 00:14:48.565 { 00:14:48.565 "name": "BaseBdev3", 00:14:48.565 "uuid": "545b8a12-0be0-4f7e-b19e-78d1e276aff2", 00:14:48.565 "is_configured": true, 00:14:48.565 "data_offset": 0, 00:14:48.565 "data_size": 65536 00:14:48.565 } 00:14:48.565 ] 00:14:48.565 }' 00:14:48.565 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:48.565 10:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:49.130 [2024-07-25 10:29:52.780754] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:49.130 "name": "Existed_Raid", 00:14:49.130 "aliases": [ 00:14:49.130 "0ca7b57e-d051-4fa6-a519-44b508f451e7" 00:14:49.130 ], 00:14:49.130 "product_name": "Raid Volume", 00:14:49.130 "block_size": 512, 00:14:49.130 "num_blocks": 65536, 00:14:49.130 "uuid": 
"0ca7b57e-d051-4fa6-a519-44b508f451e7", 00:14:49.130 "assigned_rate_limits": { 00:14:49.130 "rw_ios_per_sec": 0, 00:14:49.130 "rw_mbytes_per_sec": 0, 00:14:49.130 "r_mbytes_per_sec": 0, 00:14:49.130 "w_mbytes_per_sec": 0 00:14:49.130 }, 00:14:49.130 "claimed": false, 00:14:49.130 "zoned": false, 00:14:49.130 "supported_io_types": { 00:14:49.130 "read": true, 00:14:49.130 "write": true, 00:14:49.130 "unmap": false, 00:14:49.130 "flush": false, 00:14:49.130 "reset": true, 00:14:49.130 "nvme_admin": false, 00:14:49.130 "nvme_io": false, 00:14:49.130 "nvme_io_md": false, 00:14:49.130 "write_zeroes": true, 00:14:49.130 "zcopy": false, 00:14:49.130 "get_zone_info": false, 00:14:49.130 "zone_management": false, 00:14:49.130 "zone_append": false, 00:14:49.130 "compare": false, 00:14:49.130 "compare_and_write": false, 00:14:49.130 "abort": false, 00:14:49.130 "seek_hole": false, 00:14:49.130 "seek_data": false, 00:14:49.130 "copy": false, 00:14:49.130 "nvme_iov_md": false 00:14:49.130 }, 00:14:49.130 "memory_domains": [ 00:14:49.130 { 00:14:49.130 "dma_device_id": "system", 00:14:49.130 "dma_device_type": 1 00:14:49.130 }, 00:14:49.130 { 00:14:49.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.130 "dma_device_type": 2 00:14:49.130 }, 00:14:49.130 { 00:14:49.130 "dma_device_id": "system", 00:14:49.130 "dma_device_type": 1 00:14:49.130 }, 00:14:49.130 { 00:14:49.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.130 "dma_device_type": 2 00:14:49.130 }, 00:14:49.130 { 00:14:49.130 "dma_device_id": "system", 00:14:49.130 "dma_device_type": 1 00:14:49.130 }, 00:14:49.130 { 00:14:49.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.130 "dma_device_type": 2 00:14:49.130 } 00:14:49.130 ], 00:14:49.130 "driver_specific": { 00:14:49.130 "raid": { 00:14:49.130 "uuid": "0ca7b57e-d051-4fa6-a519-44b508f451e7", 00:14:49.130 "strip_size_kb": 0, 00:14:49.130 "state": "online", 00:14:49.130 "raid_level": "raid1", 00:14:49.130 "superblock": false, 00:14:49.130 
"num_base_bdevs": 3, 00:14:49.130 "num_base_bdevs_discovered": 3, 00:14:49.130 "num_base_bdevs_operational": 3, 00:14:49.130 "base_bdevs_list": [ 00:14:49.130 { 00:14:49.130 "name": "BaseBdev1", 00:14:49.130 "uuid": "ea1db7d5-2f5f-4350-928a-94c88292728e", 00:14:49.130 "is_configured": true, 00:14:49.130 "data_offset": 0, 00:14:49.130 "data_size": 65536 00:14:49.130 }, 00:14:49.130 { 00:14:49.130 "name": "BaseBdev2", 00:14:49.130 "uuid": "71719b84-8826-4803-b3d2-ef4c81514042", 00:14:49.130 "is_configured": true, 00:14:49.130 "data_offset": 0, 00:14:49.130 "data_size": 65536 00:14:49.130 }, 00:14:49.130 { 00:14:49.130 "name": "BaseBdev3", 00:14:49.130 "uuid": "545b8a12-0be0-4f7e-b19e-78d1e276aff2", 00:14:49.130 "is_configured": true, 00:14:49.130 "data_offset": 0, 00:14:49.130 "data_size": 65536 00:14:49.130 } 00:14:49.130 ] 00:14:49.130 } 00:14:49.130 } 00:14:49.130 }' 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:49.130 BaseBdev2 00:14:49.130 BaseBdev3' 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:49.130 10:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:49.389 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:49.389 "name": "BaseBdev1", 00:14:49.389 "aliases": [ 00:14:49.389 "ea1db7d5-2f5f-4350-928a-94c88292728e" 00:14:49.389 ], 00:14:49.389 "product_name": "Malloc disk", 00:14:49.389 "block_size": 512, 00:14:49.389 "num_blocks": 65536, 00:14:49.389 "uuid": 
"ea1db7d5-2f5f-4350-928a-94c88292728e", 00:14:49.389 "assigned_rate_limits": { 00:14:49.389 "rw_ios_per_sec": 0, 00:14:49.389 "rw_mbytes_per_sec": 0, 00:14:49.389 "r_mbytes_per_sec": 0, 00:14:49.389 "w_mbytes_per_sec": 0 00:14:49.389 }, 00:14:49.389 "claimed": true, 00:14:49.389 "claim_type": "exclusive_write", 00:14:49.389 "zoned": false, 00:14:49.389 "supported_io_types": { 00:14:49.389 "read": true, 00:14:49.389 "write": true, 00:14:49.389 "unmap": true, 00:14:49.389 "flush": true, 00:14:49.389 "reset": true, 00:14:49.389 "nvme_admin": false, 00:14:49.389 "nvme_io": false, 00:14:49.389 "nvme_io_md": false, 00:14:49.389 "write_zeroes": true, 00:14:49.389 "zcopy": true, 00:14:49.389 "get_zone_info": false, 00:14:49.389 "zone_management": false, 00:14:49.389 "zone_append": false, 00:14:49.389 "compare": false, 00:14:49.389 "compare_and_write": false, 00:14:49.389 "abort": true, 00:14:49.389 "seek_hole": false, 00:14:49.389 "seek_data": false, 00:14:49.389 "copy": true, 00:14:49.389 "nvme_iov_md": false 00:14:49.389 }, 00:14:49.389 "memory_domains": [ 00:14:49.389 { 00:14:49.389 "dma_device_id": "system", 00:14:49.389 "dma_device_type": 1 00:14:49.389 }, 00:14:49.389 { 00:14:49.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.389 "dma_device_type": 2 00:14:49.389 } 00:14:49.389 ], 00:14:49.389 "driver_specific": {} 00:14:49.389 }' 00:14:49.389 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:49.646 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:49.646 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:49.646 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:49.646 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:49.646 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:49.646 10:29:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:49.646 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:49.646 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:49.646 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:49.646 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:49.646 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:49.646 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:49.646 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:49.646 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:49.903 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:49.903 "name": "BaseBdev2", 00:14:49.903 "aliases": [ 00:14:49.903 "71719b84-8826-4803-b3d2-ef4c81514042" 00:14:49.903 ], 00:14:49.903 "product_name": "Malloc disk", 00:14:49.903 "block_size": 512, 00:14:49.903 "num_blocks": 65536, 00:14:49.903 "uuid": "71719b84-8826-4803-b3d2-ef4c81514042", 00:14:49.903 "assigned_rate_limits": { 00:14:49.903 "rw_ios_per_sec": 0, 00:14:49.903 "rw_mbytes_per_sec": 0, 00:14:49.903 "r_mbytes_per_sec": 0, 00:14:49.903 "w_mbytes_per_sec": 0 00:14:49.903 }, 00:14:49.903 "claimed": true, 00:14:49.903 "claim_type": "exclusive_write", 00:14:49.903 "zoned": false, 00:14:49.903 "supported_io_types": { 00:14:49.903 "read": true, 00:14:49.903 "write": true, 00:14:49.903 "unmap": true, 00:14:49.903 "flush": true, 00:14:49.903 "reset": true, 00:14:49.903 "nvme_admin": false, 00:14:49.903 "nvme_io": false, 00:14:49.903 "nvme_io_md": false, 
00:14:49.903 "write_zeroes": true, 00:14:49.903 "zcopy": true, 00:14:49.903 "get_zone_info": false, 00:14:49.903 "zone_management": false, 00:14:49.903 "zone_append": false, 00:14:49.903 "compare": false, 00:14:49.903 "compare_and_write": false, 00:14:49.903 "abort": true, 00:14:49.903 "seek_hole": false, 00:14:49.903 "seek_data": false, 00:14:49.903 "copy": true, 00:14:49.903 "nvme_iov_md": false 00:14:49.903 }, 00:14:49.903 "memory_domains": [ 00:14:49.903 { 00:14:49.903 "dma_device_id": "system", 00:14:49.903 "dma_device_type": 1 00:14:49.903 }, 00:14:49.903 { 00:14:49.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.903 "dma_device_type": 2 00:14:49.903 } 00:14:49.903 ], 00:14:49.903 "driver_specific": {} 00:14:49.903 }' 00:14:49.903 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.161 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.161 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:50.161 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.161 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.161 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:50.161 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.161 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.161 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:50.161 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.161 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.419 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:50.419 10:29:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:50.419 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:50.419 10:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:50.419 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:50.419 "name": "BaseBdev3", 00:14:50.419 "aliases": [ 00:14:50.419 "545b8a12-0be0-4f7e-b19e-78d1e276aff2" 00:14:50.419 ], 00:14:50.419 "product_name": "Malloc disk", 00:14:50.419 "block_size": 512, 00:14:50.419 "num_blocks": 65536, 00:14:50.419 "uuid": "545b8a12-0be0-4f7e-b19e-78d1e276aff2", 00:14:50.419 "assigned_rate_limits": { 00:14:50.419 "rw_ios_per_sec": 0, 00:14:50.419 "rw_mbytes_per_sec": 0, 00:14:50.419 "r_mbytes_per_sec": 0, 00:14:50.419 "w_mbytes_per_sec": 0 00:14:50.419 }, 00:14:50.419 "claimed": true, 00:14:50.419 "claim_type": "exclusive_write", 00:14:50.419 "zoned": false, 00:14:50.419 "supported_io_types": { 00:14:50.419 "read": true, 00:14:50.419 "write": true, 00:14:50.419 "unmap": true, 00:14:50.419 "flush": true, 00:14:50.419 "reset": true, 00:14:50.419 "nvme_admin": false, 00:14:50.419 "nvme_io": false, 00:14:50.419 "nvme_io_md": false, 00:14:50.419 "write_zeroes": true, 00:14:50.419 "zcopy": true, 00:14:50.419 "get_zone_info": false, 00:14:50.419 "zone_management": false, 00:14:50.419 "zone_append": false, 00:14:50.419 "compare": false, 00:14:50.419 "compare_and_write": false, 00:14:50.419 "abort": true, 00:14:50.419 "seek_hole": false, 00:14:50.419 "seek_data": false, 00:14:50.419 "copy": true, 00:14:50.420 "nvme_iov_md": false 00:14:50.420 }, 00:14:50.420 "memory_domains": [ 00:14:50.420 { 00:14:50.420 "dma_device_id": "system", 00:14:50.420 "dma_device_type": 1 00:14:50.420 }, 00:14:50.420 { 00:14:50.420 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:50.420 "dma_device_type": 2 00:14:50.420 } 00:14:50.420 ], 00:14:50.420 "driver_specific": {} 00:14:50.420 }' 00:14:50.679 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.679 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.679 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:50.679 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.679 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.679 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:50.679 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.679 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.679 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:50.679 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.679 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:50.938 [2024-07-25 10:29:54.613449] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 
in 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.938 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.196 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.196 "name": "Existed_Raid", 00:14:51.196 "uuid": "0ca7b57e-d051-4fa6-a519-44b508f451e7", 00:14:51.196 "strip_size_kb": 0, 00:14:51.196 "state": "online", 00:14:51.196 "raid_level": "raid1", 
00:14:51.196 "superblock": false, 00:14:51.196 "num_base_bdevs": 3, 00:14:51.196 "num_base_bdevs_discovered": 2, 00:14:51.196 "num_base_bdevs_operational": 2, 00:14:51.196 "base_bdevs_list": [ 00:14:51.196 { 00:14:51.196 "name": null, 00:14:51.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.196 "is_configured": false, 00:14:51.196 "data_offset": 0, 00:14:51.196 "data_size": 65536 00:14:51.196 }, 00:14:51.196 { 00:14:51.196 "name": "BaseBdev2", 00:14:51.196 "uuid": "71719b84-8826-4803-b3d2-ef4c81514042", 00:14:51.196 "is_configured": true, 00:14:51.196 "data_offset": 0, 00:14:51.196 "data_size": 65536 00:14:51.196 }, 00:14:51.196 { 00:14:51.196 "name": "BaseBdev3", 00:14:51.196 "uuid": "545b8a12-0be0-4f7e-b19e-78d1e276aff2", 00:14:51.196 "is_configured": true, 00:14:51.196 "data_offset": 0, 00:14:51.196 "data_size": 65536 00:14:51.196 } 00:14:51.196 ] 00:14:51.196 }' 00:14:51.196 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.196 10:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.762 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:51.762 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:51.762 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.762 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:52.021 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:52.021 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:52.021 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:52.278 [2024-07-25 10:29:55.941295] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:52.278 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:52.278 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:52.278 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.278 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:52.536 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:52.536 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:52.536 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:52.793 [2024-07-25 10:29:56.438028] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:52.793 [2024-07-25 10:29:56.438155] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.793 [2024-07-25 10:29:56.459554] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.793 [2024-07-25 10:29:56.459630] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.793 [2024-07-25 10:29:56.459642] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2246d90 name Existed_Raid, state offline 00:14:52.793 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:52.793 10:29:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:52.793 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.793 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:53.051 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:53.051 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:53.051 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:53.051 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:53.051 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:53.051 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:53.309 BaseBdev2 00:14:53.309 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:53.309 10:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:53.309 10:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:53.309 10:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:53.309 10:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:53.309 10:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:53.309 10:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:53.567 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:53.825 [ 00:14:53.825 { 00:14:53.825 "name": "BaseBdev2", 00:14:53.825 "aliases": [ 00:14:53.825 "6f4c7afe-03ea-4aa0-9b40-ad92383198a9" 00:14:53.825 ], 00:14:53.825 "product_name": "Malloc disk", 00:14:53.825 "block_size": 512, 00:14:53.825 "num_blocks": 65536, 00:14:53.825 "uuid": "6f4c7afe-03ea-4aa0-9b40-ad92383198a9", 00:14:53.825 "assigned_rate_limits": { 00:14:53.825 "rw_ios_per_sec": 0, 00:14:53.825 "rw_mbytes_per_sec": 0, 00:14:53.825 "r_mbytes_per_sec": 0, 00:14:53.825 "w_mbytes_per_sec": 0 00:14:53.825 }, 00:14:53.825 "claimed": false, 00:14:53.825 "zoned": false, 00:14:53.825 "supported_io_types": { 00:14:53.825 "read": true, 00:14:53.825 "write": true, 00:14:53.825 "unmap": true, 00:14:53.825 "flush": true, 00:14:53.825 "reset": true, 00:14:53.825 "nvme_admin": false, 00:14:53.825 "nvme_io": false, 00:14:53.825 "nvme_io_md": false, 00:14:53.825 "write_zeroes": true, 00:14:53.825 "zcopy": true, 00:14:53.825 "get_zone_info": false, 00:14:53.825 "zone_management": false, 00:14:53.825 "zone_append": false, 00:14:53.825 "compare": false, 00:14:53.825 "compare_and_write": false, 00:14:53.825 "abort": true, 00:14:53.825 "seek_hole": false, 00:14:53.825 "seek_data": false, 00:14:53.825 "copy": true, 00:14:53.825 "nvme_iov_md": false 00:14:53.825 }, 00:14:53.825 "memory_domains": [ 00:14:53.825 { 00:14:53.825 "dma_device_id": "system", 00:14:53.825 "dma_device_type": 1 00:14:53.825 }, 00:14:53.825 { 00:14:53.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.825 "dma_device_type": 2 00:14:53.825 } 00:14:53.825 ], 00:14:53.825 "driver_specific": {} 00:14:53.825 } 00:14:53.825 ] 00:14:53.825 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:53.825 
10:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:53.825 10:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:53.825 10:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:54.084 BaseBdev3 00:14:54.084 10:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:54.084 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:54.084 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:54.084 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:54.084 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:54.084 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:54.084 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:54.343 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:54.600 [ 00:14:54.600 { 00:14:54.600 "name": "BaseBdev3", 00:14:54.600 "aliases": [ 00:14:54.600 "a34e7f7e-f3b0-4996-b33f-da78ed1bb306" 00:14:54.600 ], 00:14:54.600 "product_name": "Malloc disk", 00:14:54.600 "block_size": 512, 00:14:54.600 "num_blocks": 65536, 00:14:54.600 "uuid": "a34e7f7e-f3b0-4996-b33f-da78ed1bb306", 00:14:54.600 "assigned_rate_limits": { 00:14:54.600 "rw_ios_per_sec": 0, 00:14:54.600 "rw_mbytes_per_sec": 0, 00:14:54.600 
"r_mbytes_per_sec": 0, 00:14:54.600 "w_mbytes_per_sec": 0 00:14:54.600 }, 00:14:54.600 "claimed": false, 00:14:54.600 "zoned": false, 00:14:54.600 "supported_io_types": { 00:14:54.600 "read": true, 00:14:54.600 "write": true, 00:14:54.600 "unmap": true, 00:14:54.600 "flush": true, 00:14:54.600 "reset": true, 00:14:54.600 "nvme_admin": false, 00:14:54.600 "nvme_io": false, 00:14:54.600 "nvme_io_md": false, 00:14:54.600 "write_zeroes": true, 00:14:54.600 "zcopy": true, 00:14:54.600 "get_zone_info": false, 00:14:54.600 "zone_management": false, 00:14:54.600 "zone_append": false, 00:14:54.600 "compare": false, 00:14:54.600 "compare_and_write": false, 00:14:54.600 "abort": true, 00:14:54.600 "seek_hole": false, 00:14:54.600 "seek_data": false, 00:14:54.600 "copy": true, 00:14:54.600 "nvme_iov_md": false 00:14:54.600 }, 00:14:54.600 "memory_domains": [ 00:14:54.600 { 00:14:54.600 "dma_device_id": "system", 00:14:54.600 "dma_device_type": 1 00:14:54.600 }, 00:14:54.600 { 00:14:54.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.600 "dma_device_type": 2 00:14:54.600 } 00:14:54.600 ], 00:14:54.600 "driver_specific": {} 00:14:54.600 } 00:14:54.600 ] 00:14:54.600 10:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:54.600 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:54.601 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:54.601 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:54.858 [2024-07-25 10:29:58.401827] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.858 [2024-07-25 10:29:58.401868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:14:54.858 [2024-07-25 10:29:58.401897] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.858 [2024-07-25 10:29:58.403242] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.858 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:54.858 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.858 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:54.858 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:54.858 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:54.858 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:54.859 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.859 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.859 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.859 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.859 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.859 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.146 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.146 "name": "Existed_Raid", 00:14:55.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.146 "strip_size_kb": 0, 00:14:55.146 "state": 
"configuring", 00:14:55.146 "raid_level": "raid1", 00:14:55.146 "superblock": false, 00:14:55.146 "num_base_bdevs": 3, 00:14:55.146 "num_base_bdevs_discovered": 2, 00:14:55.146 "num_base_bdevs_operational": 3, 00:14:55.146 "base_bdevs_list": [ 00:14:55.146 { 00:14:55.146 "name": "BaseBdev1", 00:14:55.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.146 "is_configured": false, 00:14:55.146 "data_offset": 0, 00:14:55.146 "data_size": 0 00:14:55.146 }, 00:14:55.146 { 00:14:55.146 "name": "BaseBdev2", 00:14:55.146 "uuid": "6f4c7afe-03ea-4aa0-9b40-ad92383198a9", 00:14:55.146 "is_configured": true, 00:14:55.146 "data_offset": 0, 00:14:55.146 "data_size": 65536 00:14:55.146 }, 00:14:55.146 { 00:14:55.146 "name": "BaseBdev3", 00:14:55.146 "uuid": "a34e7f7e-f3b0-4996-b33f-da78ed1bb306", 00:14:55.146 "is_configured": true, 00:14:55.146 "data_offset": 0, 00:14:55.146 "data_size": 65536 00:14:55.146 } 00:14:55.146 ] 00:14:55.146 }' 00:14:55.146 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.146 10:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.711 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:55.969 [2024-07-25 10:29:59.484679] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.969 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:55.969 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:55.969 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:55.969 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:55.969 10:29:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:55.969 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:55.969 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.969 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.969 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.969 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.969 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.969 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.227 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:56.227 "name": "Existed_Raid", 00:14:56.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.227 "strip_size_kb": 0, 00:14:56.227 "state": "configuring", 00:14:56.227 "raid_level": "raid1", 00:14:56.227 "superblock": false, 00:14:56.227 "num_base_bdevs": 3, 00:14:56.227 "num_base_bdevs_discovered": 1, 00:14:56.227 "num_base_bdevs_operational": 3, 00:14:56.227 "base_bdevs_list": [ 00:14:56.227 { 00:14:56.227 "name": "BaseBdev1", 00:14:56.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.227 "is_configured": false, 00:14:56.227 "data_offset": 0, 00:14:56.227 "data_size": 0 00:14:56.227 }, 00:14:56.227 { 00:14:56.227 "name": null, 00:14:56.227 "uuid": "6f4c7afe-03ea-4aa0-9b40-ad92383198a9", 00:14:56.227 "is_configured": false, 00:14:56.227 "data_offset": 0, 00:14:56.227 "data_size": 65536 00:14:56.227 }, 00:14:56.227 { 00:14:56.227 "name": "BaseBdev3", 00:14:56.227 "uuid": 
"a34e7f7e-f3b0-4996-b33f-da78ed1bb306", 00:14:56.227 "is_configured": true, 00:14:56.227 "data_offset": 0, 00:14:56.227 "data_size": 65536 00:14:56.227 } 00:14:56.227 ] 00:14:56.227 }' 00:14:56.227 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:56.227 10:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.792 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.792 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:57.065 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:57.065 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:57.322 [2024-07-25 10:30:00.795773] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.322 BaseBdev1 00:14:57.322 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:57.322 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:57.322 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:57.322 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:57.322 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:57.322 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:57.322 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:57.580 10:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:57.838 [ 00:14:57.838 { 00:14:57.838 "name": "BaseBdev1", 00:14:57.838 "aliases": [ 00:14:57.838 "392d84e3-a531-4e0e-9cba-c6f2c70af7e2" 00:14:57.838 ], 00:14:57.838 "product_name": "Malloc disk", 00:14:57.838 "block_size": 512, 00:14:57.838 "num_blocks": 65536, 00:14:57.838 "uuid": "392d84e3-a531-4e0e-9cba-c6f2c70af7e2", 00:14:57.838 "assigned_rate_limits": { 00:14:57.838 "rw_ios_per_sec": 0, 00:14:57.838 "rw_mbytes_per_sec": 0, 00:14:57.838 "r_mbytes_per_sec": 0, 00:14:57.838 "w_mbytes_per_sec": 0 00:14:57.838 }, 00:14:57.838 "claimed": true, 00:14:57.838 "claim_type": "exclusive_write", 00:14:57.838 "zoned": false, 00:14:57.838 "supported_io_types": { 00:14:57.838 "read": true, 00:14:57.838 "write": true, 00:14:57.838 "unmap": true, 00:14:57.838 "flush": true, 00:14:57.838 "reset": true, 00:14:57.838 "nvme_admin": false, 00:14:57.838 "nvme_io": false, 00:14:57.838 "nvme_io_md": false, 00:14:57.838 "write_zeroes": true, 00:14:57.838 "zcopy": true, 00:14:57.838 "get_zone_info": false, 00:14:57.838 "zone_management": false, 00:14:57.838 "zone_append": false, 00:14:57.838 "compare": false, 00:14:57.838 "compare_and_write": false, 00:14:57.838 "abort": true, 00:14:57.838 "seek_hole": false, 00:14:57.838 "seek_data": false, 00:14:57.838 "copy": true, 00:14:57.838 "nvme_iov_md": false 00:14:57.838 }, 00:14:57.838 "memory_domains": [ 00:14:57.838 { 00:14:57.838 "dma_device_id": "system", 00:14:57.838 "dma_device_type": 1 00:14:57.838 }, 00:14:57.838 { 00:14:57.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.838 "dma_device_type": 2 00:14:57.838 } 00:14:57.838 ], 00:14:57.838 "driver_specific": {} 00:14:57.838 } 00:14:57.838 ] 
00:14:57.838 10:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:57.838 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:57.838 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:57.838 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:57.838 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:57.838 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:57.838 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:57.838 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.838 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.838 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.838 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.838 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.838 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.097 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:58.097 "name": "Existed_Raid", 00:14:58.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.097 "strip_size_kb": 0, 00:14:58.097 "state": "configuring", 00:14:58.097 "raid_level": "raid1", 00:14:58.097 "superblock": false, 00:14:58.097 "num_base_bdevs": 3, 00:14:58.097 
"num_base_bdevs_discovered": 2, 00:14:58.097 "num_base_bdevs_operational": 3, 00:14:58.097 "base_bdevs_list": [ 00:14:58.097 { 00:14:58.097 "name": "BaseBdev1", 00:14:58.097 "uuid": "392d84e3-a531-4e0e-9cba-c6f2c70af7e2", 00:14:58.097 "is_configured": true, 00:14:58.097 "data_offset": 0, 00:14:58.097 "data_size": 65536 00:14:58.097 }, 00:14:58.097 { 00:14:58.097 "name": null, 00:14:58.097 "uuid": "6f4c7afe-03ea-4aa0-9b40-ad92383198a9", 00:14:58.097 "is_configured": false, 00:14:58.097 "data_offset": 0, 00:14:58.097 "data_size": 65536 00:14:58.097 }, 00:14:58.097 { 00:14:58.097 "name": "BaseBdev3", 00:14:58.097 "uuid": "a34e7f7e-f3b0-4996-b33f-da78ed1bb306", 00:14:58.097 "is_configured": true, 00:14:58.097 "data_offset": 0, 00:14:58.097 "data_size": 65536 00:14:58.097 } 00:14:58.097 ] 00:14:58.097 }' 00:14:58.097 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:58.097 10:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.663 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.663 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:58.921 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:58.921 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:59.179 [2024-07-25 10:30:02.652695] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:59.180 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:59.180 10:30:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.180 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:59.180 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:59.180 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:59.180 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:59.180 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.180 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.180 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.180 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.180 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.180 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.438 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.438 "name": "Existed_Raid", 00:14:59.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.438 "strip_size_kb": 0, 00:14:59.438 "state": "configuring", 00:14:59.438 "raid_level": "raid1", 00:14:59.438 "superblock": false, 00:14:59.438 "num_base_bdevs": 3, 00:14:59.438 "num_base_bdevs_discovered": 1, 00:14:59.438 "num_base_bdevs_operational": 3, 00:14:59.438 "base_bdevs_list": [ 00:14:59.438 { 00:14:59.438 "name": "BaseBdev1", 00:14:59.438 "uuid": "392d84e3-a531-4e0e-9cba-c6f2c70af7e2", 00:14:59.438 "is_configured": true, 00:14:59.438 "data_offset": 0, 00:14:59.438 "data_size": 65536 
00:14:59.438 }, 00:14:59.438 { 00:14:59.438 "name": null, 00:14:59.438 "uuid": "6f4c7afe-03ea-4aa0-9b40-ad92383198a9", 00:14:59.438 "is_configured": false, 00:14:59.438 "data_offset": 0, 00:14:59.438 "data_size": 65536 00:14:59.438 }, 00:14:59.438 { 00:14:59.438 "name": null, 00:14:59.438 "uuid": "a34e7f7e-f3b0-4996-b33f-da78ed1bb306", 00:14:59.438 "is_configured": false, 00:14:59.438 "data_offset": 0, 00:14:59.438 "data_size": 65536 00:14:59.438 } 00:14:59.438 ] 00:14:59.438 }' 00:14:59.438 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.438 10:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.003 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.003 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:00.260 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:00.260 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:00.260 [2024-07-25 10:30:03.964198] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.518 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:00.518 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:00.518 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:00.518 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:00.518 10:30:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:00.518 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:00.518 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:00.518 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:00.518 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:00.518 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:00.518 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.518 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.776 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.776 "name": "Existed_Raid", 00:15:00.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.776 "strip_size_kb": 0, 00:15:00.776 "state": "configuring", 00:15:00.776 "raid_level": "raid1", 00:15:00.776 "superblock": false, 00:15:00.776 "num_base_bdevs": 3, 00:15:00.776 "num_base_bdevs_discovered": 2, 00:15:00.776 "num_base_bdevs_operational": 3, 00:15:00.776 "base_bdevs_list": [ 00:15:00.776 { 00:15:00.776 "name": "BaseBdev1", 00:15:00.776 "uuid": "392d84e3-a531-4e0e-9cba-c6f2c70af7e2", 00:15:00.776 "is_configured": true, 00:15:00.776 "data_offset": 0, 00:15:00.776 "data_size": 65536 00:15:00.776 }, 00:15:00.776 { 00:15:00.776 "name": null, 00:15:00.776 "uuid": "6f4c7afe-03ea-4aa0-9b40-ad92383198a9", 00:15:00.776 "is_configured": false, 00:15:00.776 "data_offset": 0, 00:15:00.776 "data_size": 65536 00:15:00.776 }, 00:15:00.776 { 00:15:00.776 "name": "BaseBdev3", 00:15:00.776 "uuid": 
"a34e7f7e-f3b0-4996-b33f-da78ed1bb306", 00:15:00.776 "is_configured": true, 00:15:00.776 "data_offset": 0, 00:15:00.776 "data_size": 65536 00:15:00.776 } 00:15:00.776 ] 00:15:00.776 }' 00:15:00.776 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.776 10:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.342 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.342 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:01.600 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:01.600 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:01.858 [2024-07-25 10:30:05.347859] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.858 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:01.858 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.858 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:01.858 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:01.858 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:01.858 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:01.858 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.858 10:30:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.858 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.858 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.858 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.858 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.116 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.116 "name": "Existed_Raid", 00:15:02.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.117 "strip_size_kb": 0, 00:15:02.117 "state": "configuring", 00:15:02.117 "raid_level": "raid1", 00:15:02.117 "superblock": false, 00:15:02.117 "num_base_bdevs": 3, 00:15:02.117 "num_base_bdevs_discovered": 1, 00:15:02.117 "num_base_bdevs_operational": 3, 00:15:02.117 "base_bdevs_list": [ 00:15:02.117 { 00:15:02.117 "name": null, 00:15:02.117 "uuid": "392d84e3-a531-4e0e-9cba-c6f2c70af7e2", 00:15:02.117 "is_configured": false, 00:15:02.117 "data_offset": 0, 00:15:02.117 "data_size": 65536 00:15:02.117 }, 00:15:02.117 { 00:15:02.117 "name": null, 00:15:02.117 "uuid": "6f4c7afe-03ea-4aa0-9b40-ad92383198a9", 00:15:02.117 "is_configured": false, 00:15:02.117 "data_offset": 0, 00:15:02.117 "data_size": 65536 00:15:02.117 }, 00:15:02.117 { 00:15:02.117 "name": "BaseBdev3", 00:15:02.117 "uuid": "a34e7f7e-f3b0-4996-b33f-da78ed1bb306", 00:15:02.117 "is_configured": true, 00:15:02.117 "data_offset": 0, 00:15:02.117 "data_size": 65536 00:15:02.117 } 00:15:02.117 ] 00:15:02.117 }' 00:15:02.117 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.117 10:30:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.683 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.683 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:02.941 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:02.941 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:03.200 [2024-07-25 10:30:06.690113] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:03.200 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:03.200 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:03.200 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:03.200 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:03.200 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:03.200 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:03.200 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.200 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.200 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.200 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:15:03.200 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.200 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.458 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.458 "name": "Existed_Raid", 00:15:03.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.458 "strip_size_kb": 0, 00:15:03.458 "state": "configuring", 00:15:03.458 "raid_level": "raid1", 00:15:03.458 "superblock": false, 00:15:03.458 "num_base_bdevs": 3, 00:15:03.458 "num_base_bdevs_discovered": 2, 00:15:03.458 "num_base_bdevs_operational": 3, 00:15:03.458 "base_bdevs_list": [ 00:15:03.458 { 00:15:03.458 "name": null, 00:15:03.458 "uuid": "392d84e3-a531-4e0e-9cba-c6f2c70af7e2", 00:15:03.458 "is_configured": false, 00:15:03.458 "data_offset": 0, 00:15:03.458 "data_size": 65536 00:15:03.458 }, 00:15:03.458 { 00:15:03.458 "name": "BaseBdev2", 00:15:03.458 "uuid": "6f4c7afe-03ea-4aa0-9b40-ad92383198a9", 00:15:03.458 "is_configured": true, 00:15:03.458 "data_offset": 0, 00:15:03.458 "data_size": 65536 00:15:03.458 }, 00:15:03.458 { 00:15:03.458 "name": "BaseBdev3", 00:15:03.458 "uuid": "a34e7f7e-f3b0-4996-b33f-da78ed1bb306", 00:15:03.458 "is_configured": true, 00:15:03.458 "data_offset": 0, 00:15:03.458 "data_size": 65536 00:15:03.458 } 00:15:03.458 ] 00:15:03.458 }' 00:15:03.458 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.458 10:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.029 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.029 10:30:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:04.288 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:04.288 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.288 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:04.288 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 392d84e3-a531-4e0e-9cba-c6f2c70af7e2 00:15:04.547 [2024-07-25 10:30:08.209823] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:04.547 [2024-07-25 10:30:08.209890] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x224aa30 00:15:04.547 [2024-07-25 10:30:08.209899] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:04.547 [2024-07-25 10:30:08.210085] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x224df20 00:15:04.547 [2024-07-25 10:30:08.210255] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x224aa30 00:15:04.547 [2024-07-25 10:30:08.210268] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x224aa30 00:15:04.547 [2024-07-25 10:30:08.210472] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.547 NewBaseBdev 00:15:04.547 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:04.547 10:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:04.547 10:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:15:04.547 10:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:04.547 10:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:04.547 10:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:04.547 10:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:04.806 10:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:05.064 [ 00:15:05.064 { 00:15:05.064 "name": "NewBaseBdev", 00:15:05.064 "aliases": [ 00:15:05.064 "392d84e3-a531-4e0e-9cba-c6f2c70af7e2" 00:15:05.064 ], 00:15:05.064 "product_name": "Malloc disk", 00:15:05.064 "block_size": 512, 00:15:05.064 "num_blocks": 65536, 00:15:05.064 "uuid": "392d84e3-a531-4e0e-9cba-c6f2c70af7e2", 00:15:05.064 "assigned_rate_limits": { 00:15:05.064 "rw_ios_per_sec": 0, 00:15:05.064 "rw_mbytes_per_sec": 0, 00:15:05.064 "r_mbytes_per_sec": 0, 00:15:05.064 "w_mbytes_per_sec": 0 00:15:05.064 }, 00:15:05.064 "claimed": true, 00:15:05.064 "claim_type": "exclusive_write", 00:15:05.064 "zoned": false, 00:15:05.064 "supported_io_types": { 00:15:05.064 "read": true, 00:15:05.064 "write": true, 00:15:05.064 "unmap": true, 00:15:05.064 "flush": true, 00:15:05.064 "reset": true, 00:15:05.064 "nvme_admin": false, 00:15:05.064 "nvme_io": false, 00:15:05.064 "nvme_io_md": false, 00:15:05.064 "write_zeroes": true, 00:15:05.064 "zcopy": true, 00:15:05.064 "get_zone_info": false, 00:15:05.064 "zone_management": false, 00:15:05.064 "zone_append": false, 00:15:05.064 "compare": false, 00:15:05.064 "compare_and_write": false, 00:15:05.064 "abort": true, 00:15:05.064 "seek_hole": false, 
00:15:05.064 "seek_data": false, 00:15:05.064 "copy": true, 00:15:05.064 "nvme_iov_md": false 00:15:05.064 }, 00:15:05.064 "memory_domains": [ 00:15:05.064 { 00:15:05.064 "dma_device_id": "system", 00:15:05.064 "dma_device_type": 1 00:15:05.064 }, 00:15:05.064 { 00:15:05.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.064 "dma_device_type": 2 00:15:05.064 } 00:15:05.064 ], 00:15:05.064 "driver_specific": {} 00:15:05.064 } 00:15:05.064 ] 00:15:05.064 10:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:05.064 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:05.064 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:05.064 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:05.064 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:05.064 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:05.064 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:05.064 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.064 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.064 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.064 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.064 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.064 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:15:05.323 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.323 "name": "Existed_Raid", 00:15:05.323 "uuid": "20c7e5f7-5fe5-46e4-aa80-4bc3050e4b9e", 00:15:05.323 "strip_size_kb": 0, 00:15:05.323 "state": "online", 00:15:05.323 "raid_level": "raid1", 00:15:05.323 "superblock": false, 00:15:05.323 "num_base_bdevs": 3, 00:15:05.323 "num_base_bdevs_discovered": 3, 00:15:05.323 "num_base_bdevs_operational": 3, 00:15:05.323 "base_bdevs_list": [ 00:15:05.323 { 00:15:05.323 "name": "NewBaseBdev", 00:15:05.323 "uuid": "392d84e3-a531-4e0e-9cba-c6f2c70af7e2", 00:15:05.323 "is_configured": true, 00:15:05.323 "data_offset": 0, 00:15:05.323 "data_size": 65536 00:15:05.323 }, 00:15:05.323 { 00:15:05.323 "name": "BaseBdev2", 00:15:05.323 "uuid": "6f4c7afe-03ea-4aa0-9b40-ad92383198a9", 00:15:05.323 "is_configured": true, 00:15:05.323 "data_offset": 0, 00:15:05.323 "data_size": 65536 00:15:05.323 }, 00:15:05.323 { 00:15:05.323 "name": "BaseBdev3", 00:15:05.323 "uuid": "a34e7f7e-f3b0-4996-b33f-da78ed1bb306", 00:15:05.323 "is_configured": true, 00:15:05.323 "data_offset": 0, 00:15:05.323 "data_size": 65536 00:15:05.323 } 00:15:05.323 ] 00:15:05.323 }' 00:15:05.323 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.323 10:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.888 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:05.888 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:05.888 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:05.888 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:05.888 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:15:05.888 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:05.889 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:05.889 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:06.147 [2024-07-25 10:30:09.802278] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.147 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:06.147 "name": "Existed_Raid", 00:15:06.147 "aliases": [ 00:15:06.147 "20c7e5f7-5fe5-46e4-aa80-4bc3050e4b9e" 00:15:06.147 ], 00:15:06.147 "product_name": "Raid Volume", 00:15:06.147 "block_size": 512, 00:15:06.147 "num_blocks": 65536, 00:15:06.147 "uuid": "20c7e5f7-5fe5-46e4-aa80-4bc3050e4b9e", 00:15:06.147 "assigned_rate_limits": { 00:15:06.147 "rw_ios_per_sec": 0, 00:15:06.147 "rw_mbytes_per_sec": 0, 00:15:06.147 "r_mbytes_per_sec": 0, 00:15:06.147 "w_mbytes_per_sec": 0 00:15:06.147 }, 00:15:06.147 "claimed": false, 00:15:06.147 "zoned": false, 00:15:06.147 "supported_io_types": { 00:15:06.147 "read": true, 00:15:06.147 "write": true, 00:15:06.147 "unmap": false, 00:15:06.147 "flush": false, 00:15:06.147 "reset": true, 00:15:06.147 "nvme_admin": false, 00:15:06.147 "nvme_io": false, 00:15:06.147 "nvme_io_md": false, 00:15:06.147 "write_zeroes": true, 00:15:06.147 "zcopy": false, 00:15:06.147 "get_zone_info": false, 00:15:06.147 "zone_management": false, 00:15:06.147 "zone_append": false, 00:15:06.147 "compare": false, 00:15:06.147 "compare_and_write": false, 00:15:06.147 "abort": false, 00:15:06.147 "seek_hole": false, 00:15:06.147 "seek_data": false, 00:15:06.147 "copy": false, 00:15:06.147 "nvme_iov_md": false 00:15:06.147 }, 00:15:06.147 "memory_domains": [ 00:15:06.147 { 00:15:06.147 "dma_device_id": "system", 
00:15:06.147 "dma_device_type": 1 00:15:06.147 }, 00:15:06.147 { 00:15:06.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.147 "dma_device_type": 2 00:15:06.147 }, 00:15:06.147 { 00:15:06.147 "dma_device_id": "system", 00:15:06.147 "dma_device_type": 1 00:15:06.147 }, 00:15:06.147 { 00:15:06.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.147 "dma_device_type": 2 00:15:06.147 }, 00:15:06.147 { 00:15:06.147 "dma_device_id": "system", 00:15:06.147 "dma_device_type": 1 00:15:06.147 }, 00:15:06.147 { 00:15:06.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.147 "dma_device_type": 2 00:15:06.147 } 00:15:06.147 ], 00:15:06.147 "driver_specific": { 00:15:06.147 "raid": { 00:15:06.147 "uuid": "20c7e5f7-5fe5-46e4-aa80-4bc3050e4b9e", 00:15:06.147 "strip_size_kb": 0, 00:15:06.147 "state": "online", 00:15:06.147 "raid_level": "raid1", 00:15:06.147 "superblock": false, 00:15:06.147 "num_base_bdevs": 3, 00:15:06.147 "num_base_bdevs_discovered": 3, 00:15:06.148 "num_base_bdevs_operational": 3, 00:15:06.148 "base_bdevs_list": [ 00:15:06.148 { 00:15:06.148 "name": "NewBaseBdev", 00:15:06.148 "uuid": "392d84e3-a531-4e0e-9cba-c6f2c70af7e2", 00:15:06.148 "is_configured": true, 00:15:06.148 "data_offset": 0, 00:15:06.148 "data_size": 65536 00:15:06.148 }, 00:15:06.148 { 00:15:06.148 "name": "BaseBdev2", 00:15:06.148 "uuid": "6f4c7afe-03ea-4aa0-9b40-ad92383198a9", 00:15:06.148 "is_configured": true, 00:15:06.148 "data_offset": 0, 00:15:06.148 "data_size": 65536 00:15:06.148 }, 00:15:06.148 { 00:15:06.148 "name": "BaseBdev3", 00:15:06.148 "uuid": "a34e7f7e-f3b0-4996-b33f-da78ed1bb306", 00:15:06.148 "is_configured": true, 00:15:06.148 "data_offset": 0, 00:15:06.148 "data_size": 65536 00:15:06.148 } 00:15:06.148 ] 00:15:06.148 } 00:15:06.148 } 00:15:06.148 }' 00:15:06.148 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:06.148 10:30:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:06.148 BaseBdev2 00:15:06.148 BaseBdev3' 00:15:06.406 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:06.406 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:06.406 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:06.664 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:06.664 "name": "NewBaseBdev", 00:15:06.664 "aliases": [ 00:15:06.664 "392d84e3-a531-4e0e-9cba-c6f2c70af7e2" 00:15:06.664 ], 00:15:06.664 "product_name": "Malloc disk", 00:15:06.664 "block_size": 512, 00:15:06.664 "num_blocks": 65536, 00:15:06.664 "uuid": "392d84e3-a531-4e0e-9cba-c6f2c70af7e2", 00:15:06.664 "assigned_rate_limits": { 00:15:06.664 "rw_ios_per_sec": 0, 00:15:06.664 "rw_mbytes_per_sec": 0, 00:15:06.664 "r_mbytes_per_sec": 0, 00:15:06.664 "w_mbytes_per_sec": 0 00:15:06.664 }, 00:15:06.665 "claimed": true, 00:15:06.665 "claim_type": "exclusive_write", 00:15:06.665 "zoned": false, 00:15:06.665 "supported_io_types": { 00:15:06.665 "read": true, 00:15:06.665 "write": true, 00:15:06.665 "unmap": true, 00:15:06.665 "flush": true, 00:15:06.665 "reset": true, 00:15:06.665 "nvme_admin": false, 00:15:06.665 "nvme_io": false, 00:15:06.665 "nvme_io_md": false, 00:15:06.665 "write_zeroes": true, 00:15:06.665 "zcopy": true, 00:15:06.665 "get_zone_info": false, 00:15:06.665 "zone_management": false, 00:15:06.665 "zone_append": false, 00:15:06.665 "compare": false, 00:15:06.665 "compare_and_write": false, 00:15:06.665 "abort": true, 00:15:06.665 "seek_hole": false, 00:15:06.665 "seek_data": false, 00:15:06.665 "copy": true, 00:15:06.665 "nvme_iov_md": false 00:15:06.665 }, 00:15:06.665 "memory_domains": [ 
00:15:06.665 { 00:15:06.665 "dma_device_id": "system", 00:15:06.665 "dma_device_type": 1 00:15:06.665 }, 00:15:06.665 { 00:15:06.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.665 "dma_device_type": 2 00:15:06.665 } 00:15:06.665 ], 00:15:06.665 "driver_specific": {} 00:15:06.665 }' 00:15:06.665 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.665 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.665 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:06.665 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.665 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.665 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:06.665 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.665 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.665 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:06.665 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.665 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.923 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:06.923 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:06.923 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:06.923 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:07.183 10:30:10 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:07.183 "name": "BaseBdev2", 00:15:07.183 "aliases": [ 00:15:07.183 "6f4c7afe-03ea-4aa0-9b40-ad92383198a9" 00:15:07.183 ], 00:15:07.183 "product_name": "Malloc disk", 00:15:07.183 "block_size": 512, 00:15:07.183 "num_blocks": 65536, 00:15:07.183 "uuid": "6f4c7afe-03ea-4aa0-9b40-ad92383198a9", 00:15:07.183 "assigned_rate_limits": { 00:15:07.183 "rw_ios_per_sec": 0, 00:15:07.183 "rw_mbytes_per_sec": 0, 00:15:07.183 "r_mbytes_per_sec": 0, 00:15:07.183 "w_mbytes_per_sec": 0 00:15:07.183 }, 00:15:07.183 "claimed": true, 00:15:07.183 "claim_type": "exclusive_write", 00:15:07.183 "zoned": false, 00:15:07.183 "supported_io_types": { 00:15:07.183 "read": true, 00:15:07.183 "write": true, 00:15:07.183 "unmap": true, 00:15:07.183 "flush": true, 00:15:07.183 "reset": true, 00:15:07.183 "nvme_admin": false, 00:15:07.183 "nvme_io": false, 00:15:07.183 "nvme_io_md": false, 00:15:07.183 "write_zeroes": true, 00:15:07.183 "zcopy": true, 00:15:07.183 "get_zone_info": false, 00:15:07.183 "zone_management": false, 00:15:07.183 "zone_append": false, 00:15:07.183 "compare": false, 00:15:07.183 "compare_and_write": false, 00:15:07.183 "abort": true, 00:15:07.183 "seek_hole": false, 00:15:07.183 "seek_data": false, 00:15:07.183 "copy": true, 00:15:07.183 "nvme_iov_md": false 00:15:07.183 }, 00:15:07.183 "memory_domains": [ 00:15:07.183 { 00:15:07.183 "dma_device_id": "system", 00:15:07.183 "dma_device_type": 1 00:15:07.183 }, 00:15:07.183 { 00:15:07.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.183 "dma_device_type": 2 00:15:07.183 } 00:15:07.183 ], 00:15:07.183 "driver_specific": {} 00:15:07.183 }' 00:15:07.183 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.183 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.183 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:07.183 10:30:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.183 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.183 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:07.183 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.183 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.183 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:07.183 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.183 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.442 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.442 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.442 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:07.442 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:07.442 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:07.442 "name": "BaseBdev3", 00:15:07.442 "aliases": [ 00:15:07.442 "a34e7f7e-f3b0-4996-b33f-da78ed1bb306" 00:15:07.442 ], 00:15:07.442 "product_name": "Malloc disk", 00:15:07.442 "block_size": 512, 00:15:07.442 "num_blocks": 65536, 00:15:07.442 "uuid": "a34e7f7e-f3b0-4996-b33f-da78ed1bb306", 00:15:07.442 "assigned_rate_limits": { 00:15:07.442 "rw_ios_per_sec": 0, 00:15:07.442 "rw_mbytes_per_sec": 0, 00:15:07.442 "r_mbytes_per_sec": 0, 00:15:07.442 "w_mbytes_per_sec": 0 00:15:07.442 }, 00:15:07.442 "claimed": true, 00:15:07.442 "claim_type": "exclusive_write", 
00:15:07.442 "zoned": false, 00:15:07.442 "supported_io_types": { 00:15:07.442 "read": true, 00:15:07.442 "write": true, 00:15:07.442 "unmap": true, 00:15:07.442 "flush": true, 00:15:07.442 "reset": true, 00:15:07.442 "nvme_admin": false, 00:15:07.442 "nvme_io": false, 00:15:07.442 "nvme_io_md": false, 00:15:07.442 "write_zeroes": true, 00:15:07.442 "zcopy": true, 00:15:07.442 "get_zone_info": false, 00:15:07.442 "zone_management": false, 00:15:07.442 "zone_append": false, 00:15:07.442 "compare": false, 00:15:07.442 "compare_and_write": false, 00:15:07.442 "abort": true, 00:15:07.442 "seek_hole": false, 00:15:07.442 "seek_data": false, 00:15:07.442 "copy": true, 00:15:07.442 "nvme_iov_md": false 00:15:07.442 }, 00:15:07.442 "memory_domains": [ 00:15:07.442 { 00:15:07.442 "dma_device_id": "system", 00:15:07.442 "dma_device_type": 1 00:15:07.442 }, 00:15:07.442 { 00:15:07.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.442 "dma_device_type": 2 00:15:07.442 } 00:15:07.442 ], 00:15:07.442 "driver_specific": {} 00:15:07.442 }' 00:15:07.442 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.700 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.700 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:07.700 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.700 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.700 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:07.700 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.700 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.700 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:07.700 10:30:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.701 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.958 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.958 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:08.216 [2024-07-25 10:30:11.679338] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:08.216 [2024-07-25 10:30:11.679368] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.216 [2024-07-25 10:30:11.679432] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.216 [2024-07-25 10:30:11.679692] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.216 [2024-07-25 10:30:11.679706] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x224aa30 name Existed_Raid, state offline 00:15:08.216 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2375176 00:15:08.217 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2375176 ']' 00:15:08.217 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2375176 00:15:08.217 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:08.217 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:08.217 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2375176 00:15:08.217 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:08.217 10:30:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:08.217 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2375176' 00:15:08.217 killing process with pid 2375176 00:15:08.217 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2375176 00:15:08.217 [2024-07-25 10:30:11.729237] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.217 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2375176 00:15:08.217 [2024-07-25 10:30:11.789707] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:08.783 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:08.783 00:15:08.783 real 0m28.415s 00:15:08.783 user 0m52.768s 00:15:08.783 sys 0m3.803s 00:15:08.783 10:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:08.783 10:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.783 ************************************ 00:15:08.783 END TEST raid_state_function_test 00:15:08.783 ************************************ 00:15:08.783 10:30:12 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:15:08.783 10:30:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:08.783 10:30:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.783 10:30:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:08.784 ************************************ 00:15:08.784 START TEST raid_state_function_test_sb 00:15:08.784 ************************************ 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@226 -- # local strip_size 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2379833 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2379833' 00:15:08.784 Process raid pid: 2379833 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2379833 /var/tmp/spdk-raid.sock 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2379833 ']' 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:15:08.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.784 10:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.784 [2024-07-25 10:30:12.326857] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:15:08.784 [2024-07-25 10:30:12.326941] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.784 [2024-07-25 10:30:12.437228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.041 [2024-07-25 10:30:12.575687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.041 [2024-07-25 10:30:12.650166] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.041 [2024-07-25 10:30:12.650229] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.041 10:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.041 10:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:09.042 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:09.299 [2024-07-25 10:30:12.938158] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.299 [2024-07-25 10:30:12.938201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.299 [2024-07-25 10:30:12.938214] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:09.299 [2024-07-25 10:30:12.938228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:09.299 [2024-07-25 10:30:12.938238] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:09.299 [2024-07-25 10:30:12.938250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:09.299 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:09.299 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:09.299 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:09.299 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:09.299 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:09.299 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:09.299 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.299 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.299 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.299 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.299 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.299 10:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.556 10:30:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.556 "name": "Existed_Raid", 00:15:09.556 "uuid": "66ac4e3e-f081-4151-bc8f-e89aaf767035", 00:15:09.556 "strip_size_kb": 0, 00:15:09.556 "state": "configuring", 00:15:09.556 "raid_level": "raid1", 00:15:09.556 "superblock": true, 00:15:09.556 "num_base_bdevs": 3, 00:15:09.556 "num_base_bdevs_discovered": 0, 00:15:09.556 "num_base_bdevs_operational": 3, 00:15:09.556 "base_bdevs_list": [ 00:15:09.556 { 00:15:09.556 "name": "BaseBdev1", 00:15:09.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.556 "is_configured": false, 00:15:09.556 "data_offset": 0, 00:15:09.556 "data_size": 0 00:15:09.556 }, 00:15:09.556 { 00:15:09.556 "name": "BaseBdev2", 00:15:09.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.556 "is_configured": false, 00:15:09.556 "data_offset": 0, 00:15:09.556 "data_size": 0 00:15:09.556 }, 00:15:09.556 { 00:15:09.556 "name": "BaseBdev3", 00:15:09.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.557 "is_configured": false, 00:15:09.557 "data_offset": 0, 00:15:09.557 "data_size": 0 00:15:09.557 } 00:15:09.557 ] 00:15:09.557 }' 00:15:09.557 10:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.557 10:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.121 10:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:10.378 [2024-07-25 10:30:13.996814] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.378 [2024-07-25 10:30:13.996846] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x208c620 name Existed_Raid, state configuring 00:15:10.378 10:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:10.636 [2024-07-25 10:30:14.257562] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.636 [2024-07-25 10:30:14.257606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.636 [2024-07-25 10:30:14.257628] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.636 [2024-07-25 10:30:14.257642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.636 [2024-07-25 10:30:14.257652] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:10.636 [2024-07-25 10:30:14.257665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:10.636 10:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:10.894 [2024-07-25 10:30:14.514367] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.894 BaseBdev1 00:15:10.894 10:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:10.894 10:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:10.894 10:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:10.894 10:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:10.894 10:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:10.894 10:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:15:10.895 10:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:11.152 10:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:11.410 [ 00:15:11.410 { 00:15:11.411 "name": "BaseBdev1", 00:15:11.411 "aliases": [ 00:15:11.411 "3341f588-53e7-4332-b9dd-b428d638a309" 00:15:11.411 ], 00:15:11.411 "product_name": "Malloc disk", 00:15:11.411 "block_size": 512, 00:15:11.411 "num_blocks": 65536, 00:15:11.411 "uuid": "3341f588-53e7-4332-b9dd-b428d638a309", 00:15:11.411 "assigned_rate_limits": { 00:15:11.411 "rw_ios_per_sec": 0, 00:15:11.411 "rw_mbytes_per_sec": 0, 00:15:11.411 "r_mbytes_per_sec": 0, 00:15:11.411 "w_mbytes_per_sec": 0 00:15:11.411 }, 00:15:11.411 "claimed": true, 00:15:11.411 "claim_type": "exclusive_write", 00:15:11.411 "zoned": false, 00:15:11.411 "supported_io_types": { 00:15:11.411 "read": true, 00:15:11.411 "write": true, 00:15:11.411 "unmap": true, 00:15:11.411 "flush": true, 00:15:11.411 "reset": true, 00:15:11.411 "nvme_admin": false, 00:15:11.411 "nvme_io": false, 00:15:11.411 "nvme_io_md": false, 00:15:11.411 "write_zeroes": true, 00:15:11.411 "zcopy": true, 00:15:11.411 "get_zone_info": false, 00:15:11.411 "zone_management": false, 00:15:11.411 "zone_append": false, 00:15:11.411 "compare": false, 00:15:11.411 "compare_and_write": false, 00:15:11.411 "abort": true, 00:15:11.411 "seek_hole": false, 00:15:11.411 "seek_data": false, 00:15:11.411 "copy": true, 00:15:11.411 "nvme_iov_md": false 00:15:11.411 }, 00:15:11.411 "memory_domains": [ 00:15:11.411 { 00:15:11.411 "dma_device_id": "system", 00:15:11.411 "dma_device_type": 1 00:15:11.411 }, 00:15:11.411 { 00:15:11.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.411 
"dma_device_type": 2 00:15:11.411 } 00:15:11.411 ], 00:15:11.411 "driver_specific": {} 00:15:11.411 } 00:15:11.411 ] 00:15:11.411 10:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:11.411 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:11.411 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:11.411 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:11.411 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:11.411 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:11.411 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:11.411 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.411 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.411 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.411 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.411 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.411 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.696 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.696 "name": "Existed_Raid", 00:15:11.696 "uuid": "b7f8c671-9b50-4d44-9ec4-3115de0aa947", 00:15:11.696 "strip_size_kb": 0, 
00:15:11.696 "state": "configuring", 00:15:11.696 "raid_level": "raid1", 00:15:11.696 "superblock": true, 00:15:11.696 "num_base_bdevs": 3, 00:15:11.696 "num_base_bdevs_discovered": 1, 00:15:11.696 "num_base_bdevs_operational": 3, 00:15:11.696 "base_bdevs_list": [ 00:15:11.696 { 00:15:11.696 "name": "BaseBdev1", 00:15:11.696 "uuid": "3341f588-53e7-4332-b9dd-b428d638a309", 00:15:11.696 "is_configured": true, 00:15:11.696 "data_offset": 2048, 00:15:11.696 "data_size": 63488 00:15:11.696 }, 00:15:11.696 { 00:15:11.696 "name": "BaseBdev2", 00:15:11.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.696 "is_configured": false, 00:15:11.696 "data_offset": 0, 00:15:11.696 "data_size": 0 00:15:11.696 }, 00:15:11.696 { 00:15:11.696 "name": "BaseBdev3", 00:15:11.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.696 "is_configured": false, 00:15:11.696 "data_offset": 0, 00:15:11.696 "data_size": 0 00:15:11.696 } 00:15:11.696 ] 00:15:11.696 }' 00:15:11.696 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.696 10:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.263 10:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:12.521 [2024-07-25 10:30:16.038355] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.521 [2024-07-25 10:30:16.038407] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x208be50 name Existed_Raid, state configuring 00:15:12.521 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:12.779 [2024-07-25 10:30:16.279031] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.779 [2024-07-25 10:30:16.280549] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.779 [2024-07-25 10:30:16.280585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.779 [2024-07-25 10:30:16.280598] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:12.779 [2024-07-25 10:30:16.280611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.779 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.037 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:13.037 "name": "Existed_Raid", 00:15:13.037 "uuid": "f07470db-425f-4e3c-b2d9-938390b301ed", 00:15:13.037 "strip_size_kb": 0, 00:15:13.037 "state": "configuring", 00:15:13.037 "raid_level": "raid1", 00:15:13.037 "superblock": true, 00:15:13.037 "num_base_bdevs": 3, 00:15:13.037 "num_base_bdevs_discovered": 1, 00:15:13.037 "num_base_bdevs_operational": 3, 00:15:13.037 "base_bdevs_list": [ 00:15:13.037 { 00:15:13.037 "name": "BaseBdev1", 00:15:13.037 "uuid": "3341f588-53e7-4332-b9dd-b428d638a309", 00:15:13.037 "is_configured": true, 00:15:13.037 "data_offset": 2048, 00:15:13.037 "data_size": 63488 00:15:13.037 }, 00:15:13.037 { 00:15:13.037 "name": "BaseBdev2", 00:15:13.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.037 "is_configured": false, 00:15:13.037 "data_offset": 0, 00:15:13.037 "data_size": 0 00:15:13.037 }, 00:15:13.037 { 00:15:13.037 "name": "BaseBdev3", 00:15:13.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.037 "is_configured": false, 00:15:13.037 "data_offset": 0, 00:15:13.037 "data_size": 0 00:15:13.037 } 00:15:13.037 ] 00:15:13.037 }' 00:15:13.037 10:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:13.037 10:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.602 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:13.864 
[2024-07-25 10:30:17.391582] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.864 BaseBdev2 00:15:13.864 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:13.864 10:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:13.864 10:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:13.864 10:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:13.864 10:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:13.864 10:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:13.864 10:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:14.121 10:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:14.378 [ 00:15:14.378 { 00:15:14.378 "name": "BaseBdev2", 00:15:14.378 "aliases": [ 00:15:14.378 "124a37fc-a482-49e4-8d4a-218549e7889f" 00:15:14.378 ], 00:15:14.378 "product_name": "Malloc disk", 00:15:14.378 "block_size": 512, 00:15:14.378 "num_blocks": 65536, 00:15:14.378 "uuid": "124a37fc-a482-49e4-8d4a-218549e7889f", 00:15:14.378 "assigned_rate_limits": { 00:15:14.378 "rw_ios_per_sec": 0, 00:15:14.378 "rw_mbytes_per_sec": 0, 00:15:14.378 "r_mbytes_per_sec": 0, 00:15:14.378 "w_mbytes_per_sec": 0 00:15:14.378 }, 00:15:14.378 "claimed": true, 00:15:14.378 "claim_type": "exclusive_write", 00:15:14.378 "zoned": false, 00:15:14.378 "supported_io_types": { 00:15:14.378 "read": true, 00:15:14.378 "write": true, 00:15:14.378 "unmap": 
true, 00:15:14.378 "flush": true, 00:15:14.378 "reset": true, 00:15:14.378 "nvme_admin": false, 00:15:14.378 "nvme_io": false, 00:15:14.378 "nvme_io_md": false, 00:15:14.378 "write_zeroes": true, 00:15:14.378 "zcopy": true, 00:15:14.378 "get_zone_info": false, 00:15:14.378 "zone_management": false, 00:15:14.378 "zone_append": false, 00:15:14.378 "compare": false, 00:15:14.378 "compare_and_write": false, 00:15:14.378 "abort": true, 00:15:14.378 "seek_hole": false, 00:15:14.378 "seek_data": false, 00:15:14.378 "copy": true, 00:15:14.378 "nvme_iov_md": false 00:15:14.378 }, 00:15:14.378 "memory_domains": [ 00:15:14.378 { 00:15:14.378 "dma_device_id": "system", 00:15:14.378 "dma_device_type": 1 00:15:14.378 }, 00:15:14.378 { 00:15:14.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.378 "dma_device_type": 2 00:15:14.378 } 00:15:14.378 ], 00:15:14.378 "driver_specific": {} 00:15:14.378 } 00:15:14.378 ] 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:14.378 
10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.378 10:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.636 10:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.636 "name": "Existed_Raid", 00:15:14.636 "uuid": "f07470db-425f-4e3c-b2d9-938390b301ed", 00:15:14.636 "strip_size_kb": 0, 00:15:14.636 "state": "configuring", 00:15:14.636 "raid_level": "raid1", 00:15:14.636 "superblock": true, 00:15:14.636 "num_base_bdevs": 3, 00:15:14.636 "num_base_bdevs_discovered": 2, 00:15:14.636 "num_base_bdevs_operational": 3, 00:15:14.636 "base_bdevs_list": [ 00:15:14.636 { 00:15:14.636 "name": "BaseBdev1", 00:15:14.636 "uuid": "3341f588-53e7-4332-b9dd-b428d638a309", 00:15:14.636 "is_configured": true, 00:15:14.636 "data_offset": 2048, 00:15:14.636 "data_size": 63488 00:15:14.636 }, 00:15:14.636 { 00:15:14.636 "name": "BaseBdev2", 00:15:14.636 "uuid": "124a37fc-a482-49e4-8d4a-218549e7889f", 00:15:14.636 "is_configured": true, 00:15:14.636 "data_offset": 2048, 00:15:14.636 "data_size": 63488 00:15:14.636 }, 00:15:14.636 { 00:15:14.636 "name": "BaseBdev3", 00:15:14.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.636 "is_configured": false, 00:15:14.636 "data_offset": 0, 00:15:14.636 "data_size": 0 00:15:14.636 } 00:15:14.636 ] 00:15:14.636 }' 00:15:14.636 
10:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.636 10:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.201 10:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:15.459 [2024-07-25 10:30:18.924922] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.459 [2024-07-25 10:30:18.925159] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x208cd90 00:15:15.459 [2024-07-25 10:30:18.925176] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:15.459 [2024-07-25 10:30:18.925335] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2090a90 00:15:15.459 [2024-07-25 10:30:18.925503] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x208cd90 00:15:15.459 [2024-07-25 10:30:18.925517] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x208cd90 00:15:15.459 [2024-07-25 10:30:18.925617] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.459 BaseBdev3 00:15:15.459 10:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:15.459 10:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:15.459 10:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:15.459 10:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:15.459 10:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:15.459 10:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:15:15.459 10:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:15.718 [ 00:15:15.718 { 00:15:15.718 "name": "BaseBdev3", 00:15:15.718 "aliases": [ 00:15:15.718 "db7b7114-aca5-4090-b8b2-55a1e5ab8fca" 00:15:15.718 ], 00:15:15.718 "product_name": "Malloc disk", 00:15:15.718 "block_size": 512, 00:15:15.718 "num_blocks": 65536, 00:15:15.718 "uuid": "db7b7114-aca5-4090-b8b2-55a1e5ab8fca", 00:15:15.718 "assigned_rate_limits": { 00:15:15.718 "rw_ios_per_sec": 0, 00:15:15.718 "rw_mbytes_per_sec": 0, 00:15:15.718 "r_mbytes_per_sec": 0, 00:15:15.718 "w_mbytes_per_sec": 0 00:15:15.718 }, 00:15:15.718 "claimed": true, 00:15:15.718 "claim_type": "exclusive_write", 00:15:15.718 "zoned": false, 00:15:15.718 "supported_io_types": { 00:15:15.718 "read": true, 00:15:15.718 "write": true, 00:15:15.718 "unmap": true, 00:15:15.718 "flush": true, 00:15:15.718 "reset": true, 00:15:15.718 "nvme_admin": false, 00:15:15.718 "nvme_io": false, 00:15:15.718 "nvme_io_md": false, 00:15:15.718 "write_zeroes": true, 00:15:15.718 "zcopy": true, 00:15:15.718 "get_zone_info": false, 00:15:15.718 "zone_management": false, 00:15:15.718 "zone_append": false, 00:15:15.718 "compare": false, 00:15:15.718 "compare_and_write": false, 00:15:15.718 "abort": true, 00:15:15.718 "seek_hole": false, 00:15:15.718 "seek_data": false, 00:15:15.718 "copy": true, 00:15:15.718 "nvme_iov_md": false 00:15:15.718 }, 00:15:15.718 "memory_domains": [ 00:15:15.718 { 00:15:15.718 "dma_device_id": "system", 00:15:15.718 "dma_device_type": 1 00:15:15.718 }, 00:15:15.718 { 00:15:15.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.718 
"dma_device_type": 2 00:15:15.718 } 00:15:15.718 ], 00:15:15.718 "driver_specific": {} 00:15:15.718 } 00:15:15.718 ] 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.718 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.976 10:30:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:15.976 "name": "Existed_Raid", 00:15:15.976 "uuid": "f07470db-425f-4e3c-b2d9-938390b301ed", 00:15:15.976 "strip_size_kb": 0, 00:15:15.976 "state": "online", 00:15:15.976 "raid_level": "raid1", 00:15:15.976 "superblock": true, 00:15:15.976 "num_base_bdevs": 3, 00:15:15.976 "num_base_bdevs_discovered": 3, 00:15:15.976 "num_base_bdevs_operational": 3, 00:15:15.976 "base_bdevs_list": [ 00:15:15.976 { 00:15:15.976 "name": "BaseBdev1", 00:15:15.976 "uuid": "3341f588-53e7-4332-b9dd-b428d638a309", 00:15:15.976 "is_configured": true, 00:15:15.976 "data_offset": 2048, 00:15:15.976 "data_size": 63488 00:15:15.976 }, 00:15:15.976 { 00:15:15.976 "name": "BaseBdev2", 00:15:15.976 "uuid": "124a37fc-a482-49e4-8d4a-218549e7889f", 00:15:15.976 "is_configured": true, 00:15:15.976 "data_offset": 2048, 00:15:15.976 "data_size": 63488 00:15:15.976 }, 00:15:15.976 { 00:15:15.976 "name": "BaseBdev3", 00:15:15.976 "uuid": "db7b7114-aca5-4090-b8b2-55a1e5ab8fca", 00:15:15.976 "is_configured": true, 00:15:15.976 "data_offset": 2048, 00:15:15.976 "data_size": 63488 00:15:15.976 } 00:15:15.976 ] 00:15:15.976 }' 00:15:15.976 10:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.976 10:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.910 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:16.910 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:16.910 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:16.910 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:16.910 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:15:16.910 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:16.910 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:16.910 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:16.910 [2024-07-25 10:30:20.481391] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.910 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:16.910 "name": "Existed_Raid", 00:15:16.910 "aliases": [ 00:15:16.910 "f07470db-425f-4e3c-b2d9-938390b301ed" 00:15:16.910 ], 00:15:16.910 "product_name": "Raid Volume", 00:15:16.910 "block_size": 512, 00:15:16.910 "num_blocks": 63488, 00:15:16.910 "uuid": "f07470db-425f-4e3c-b2d9-938390b301ed", 00:15:16.910 "assigned_rate_limits": { 00:15:16.910 "rw_ios_per_sec": 0, 00:15:16.910 "rw_mbytes_per_sec": 0, 00:15:16.910 "r_mbytes_per_sec": 0, 00:15:16.910 "w_mbytes_per_sec": 0 00:15:16.910 }, 00:15:16.910 "claimed": false, 00:15:16.910 "zoned": false, 00:15:16.910 "supported_io_types": { 00:15:16.910 "read": true, 00:15:16.910 "write": true, 00:15:16.910 "unmap": false, 00:15:16.910 "flush": false, 00:15:16.910 "reset": true, 00:15:16.910 "nvme_admin": false, 00:15:16.910 "nvme_io": false, 00:15:16.910 "nvme_io_md": false, 00:15:16.910 "write_zeroes": true, 00:15:16.910 "zcopy": false, 00:15:16.910 "get_zone_info": false, 00:15:16.910 "zone_management": false, 00:15:16.910 "zone_append": false, 00:15:16.910 "compare": false, 00:15:16.910 "compare_and_write": false, 00:15:16.910 "abort": false, 00:15:16.910 "seek_hole": false, 00:15:16.910 "seek_data": false, 00:15:16.910 "copy": false, 00:15:16.910 "nvme_iov_md": false 00:15:16.910 }, 00:15:16.910 "memory_domains": [ 00:15:16.910 { 00:15:16.910 "dma_device_id": "system", 00:15:16.910 
"dma_device_type": 1 00:15:16.910 }, 00:15:16.910 { 00:15:16.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.910 "dma_device_type": 2 00:15:16.910 }, 00:15:16.910 { 00:15:16.910 "dma_device_id": "system", 00:15:16.910 "dma_device_type": 1 00:15:16.910 }, 00:15:16.910 { 00:15:16.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.910 "dma_device_type": 2 00:15:16.910 }, 00:15:16.910 { 00:15:16.910 "dma_device_id": "system", 00:15:16.910 "dma_device_type": 1 00:15:16.910 }, 00:15:16.910 { 00:15:16.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.910 "dma_device_type": 2 00:15:16.910 } 00:15:16.910 ], 00:15:16.910 "driver_specific": { 00:15:16.910 "raid": { 00:15:16.910 "uuid": "f07470db-425f-4e3c-b2d9-938390b301ed", 00:15:16.910 "strip_size_kb": 0, 00:15:16.910 "state": "online", 00:15:16.910 "raid_level": "raid1", 00:15:16.910 "superblock": true, 00:15:16.910 "num_base_bdevs": 3, 00:15:16.910 "num_base_bdevs_discovered": 3, 00:15:16.910 "num_base_bdevs_operational": 3, 00:15:16.910 "base_bdevs_list": [ 00:15:16.910 { 00:15:16.910 "name": "BaseBdev1", 00:15:16.910 "uuid": "3341f588-53e7-4332-b9dd-b428d638a309", 00:15:16.910 "is_configured": true, 00:15:16.910 "data_offset": 2048, 00:15:16.910 "data_size": 63488 00:15:16.910 }, 00:15:16.910 { 00:15:16.910 "name": "BaseBdev2", 00:15:16.910 "uuid": "124a37fc-a482-49e4-8d4a-218549e7889f", 00:15:16.910 "is_configured": true, 00:15:16.910 "data_offset": 2048, 00:15:16.910 "data_size": 63488 00:15:16.910 }, 00:15:16.910 { 00:15:16.910 "name": "BaseBdev3", 00:15:16.910 "uuid": "db7b7114-aca5-4090-b8b2-55a1e5ab8fca", 00:15:16.910 "is_configured": true, 00:15:16.910 "data_offset": 2048, 00:15:16.910 "data_size": 63488 00:15:16.910 } 00:15:16.910 ] 00:15:16.910 } 00:15:16.910 } 00:15:16.910 }' 00:15:16.910 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:16.910 10:30:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:16.910 BaseBdev2 00:15:16.910 BaseBdev3' 00:15:16.910 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:16.910 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:16.910 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:17.169 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:17.169 "name": "BaseBdev1", 00:15:17.169 "aliases": [ 00:15:17.169 "3341f588-53e7-4332-b9dd-b428d638a309" 00:15:17.169 ], 00:15:17.169 "product_name": "Malloc disk", 00:15:17.169 "block_size": 512, 00:15:17.169 "num_blocks": 65536, 00:15:17.169 "uuid": "3341f588-53e7-4332-b9dd-b428d638a309", 00:15:17.169 "assigned_rate_limits": { 00:15:17.169 "rw_ios_per_sec": 0, 00:15:17.169 "rw_mbytes_per_sec": 0, 00:15:17.169 "r_mbytes_per_sec": 0, 00:15:17.169 "w_mbytes_per_sec": 0 00:15:17.169 }, 00:15:17.169 "claimed": true, 00:15:17.169 "claim_type": "exclusive_write", 00:15:17.169 "zoned": false, 00:15:17.169 "supported_io_types": { 00:15:17.169 "read": true, 00:15:17.169 "write": true, 00:15:17.169 "unmap": true, 00:15:17.169 "flush": true, 00:15:17.169 "reset": true, 00:15:17.169 "nvme_admin": false, 00:15:17.169 "nvme_io": false, 00:15:17.169 "nvme_io_md": false, 00:15:17.169 "write_zeroes": true, 00:15:17.169 "zcopy": true, 00:15:17.169 "get_zone_info": false, 00:15:17.169 "zone_management": false, 00:15:17.169 "zone_append": false, 00:15:17.169 "compare": false, 00:15:17.169 "compare_and_write": false, 00:15:17.169 "abort": true, 00:15:17.169 "seek_hole": false, 00:15:17.169 "seek_data": false, 00:15:17.169 "copy": true, 00:15:17.169 "nvme_iov_md": false 00:15:17.169 }, 00:15:17.169 "memory_domains": 
[ 00:15:17.169 { 00:15:17.169 "dma_device_id": "system", 00:15:17.169 "dma_device_type": 1 00:15:17.169 }, 00:15:17.169 { 00:15:17.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.169 "dma_device_type": 2 00:15:17.169 } 00:15:17.169 ], 00:15:17.169 "driver_specific": {} 00:15:17.169 }' 00:15:17.169 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.169 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.169 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:17.169 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.427 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.427 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:17.427 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.427 10:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.427 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:17.427 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.427 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.427 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:17.427 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:17.427 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:17.427 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
00:15:17.685 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:17.685 "name": "BaseBdev2", 00:15:17.685 "aliases": [ 00:15:17.685 "124a37fc-a482-49e4-8d4a-218549e7889f" 00:15:17.685 ], 00:15:17.685 "product_name": "Malloc disk", 00:15:17.685 "block_size": 512, 00:15:17.685 "num_blocks": 65536, 00:15:17.685 "uuid": "124a37fc-a482-49e4-8d4a-218549e7889f", 00:15:17.685 "assigned_rate_limits": { 00:15:17.685 "rw_ios_per_sec": 0, 00:15:17.685 "rw_mbytes_per_sec": 0, 00:15:17.685 "r_mbytes_per_sec": 0, 00:15:17.685 "w_mbytes_per_sec": 0 00:15:17.685 }, 00:15:17.685 "claimed": true, 00:15:17.685 "claim_type": "exclusive_write", 00:15:17.685 "zoned": false, 00:15:17.685 "supported_io_types": { 00:15:17.685 "read": true, 00:15:17.685 "write": true, 00:15:17.685 "unmap": true, 00:15:17.685 "flush": true, 00:15:17.685 "reset": true, 00:15:17.685 "nvme_admin": false, 00:15:17.685 "nvme_io": false, 00:15:17.685 "nvme_io_md": false, 00:15:17.685 "write_zeroes": true, 00:15:17.685 "zcopy": true, 00:15:17.685 "get_zone_info": false, 00:15:17.685 "zone_management": false, 00:15:17.685 "zone_append": false, 00:15:17.685 "compare": false, 00:15:17.685 "compare_and_write": false, 00:15:17.685 "abort": true, 00:15:17.685 "seek_hole": false, 00:15:17.685 "seek_data": false, 00:15:17.685 "copy": true, 00:15:17.685 "nvme_iov_md": false 00:15:17.685 }, 00:15:17.685 "memory_domains": [ 00:15:17.685 { 00:15:17.685 "dma_device_id": "system", 00:15:17.685 "dma_device_type": 1 00:15:17.685 }, 00:15:17.685 { 00:15:17.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.685 "dma_device_type": 2 00:15:17.685 } 00:15:17.685 ], 00:15:17.685 "driver_specific": {} 00:15:17.685 }' 00:15:17.685 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.685 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.943 10:30:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:17.943 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.943 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.943 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:17.943 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.943 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.943 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:17.943 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.943 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.943 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:17.943 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:17.943 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:17.943 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:18.201 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:18.201 "name": "BaseBdev3", 00:15:18.201 "aliases": [ 00:15:18.201 "db7b7114-aca5-4090-b8b2-55a1e5ab8fca" 00:15:18.201 ], 00:15:18.201 "product_name": "Malloc disk", 00:15:18.201 "block_size": 512, 00:15:18.201 "num_blocks": 65536, 00:15:18.201 "uuid": "db7b7114-aca5-4090-b8b2-55a1e5ab8fca", 00:15:18.201 "assigned_rate_limits": { 00:15:18.201 "rw_ios_per_sec": 0, 00:15:18.201 "rw_mbytes_per_sec": 0, 00:15:18.201 "r_mbytes_per_sec": 0, 00:15:18.201 
"w_mbytes_per_sec": 0 00:15:18.201 }, 00:15:18.201 "claimed": true, 00:15:18.201 "claim_type": "exclusive_write", 00:15:18.201 "zoned": false, 00:15:18.201 "supported_io_types": { 00:15:18.201 "read": true, 00:15:18.201 "write": true, 00:15:18.201 "unmap": true, 00:15:18.201 "flush": true, 00:15:18.201 "reset": true, 00:15:18.201 "nvme_admin": false, 00:15:18.201 "nvme_io": false, 00:15:18.201 "nvme_io_md": false, 00:15:18.201 "write_zeroes": true, 00:15:18.201 "zcopy": true, 00:15:18.201 "get_zone_info": false, 00:15:18.201 "zone_management": false, 00:15:18.201 "zone_append": false, 00:15:18.201 "compare": false, 00:15:18.201 "compare_and_write": false, 00:15:18.201 "abort": true, 00:15:18.201 "seek_hole": false, 00:15:18.201 "seek_data": false, 00:15:18.201 "copy": true, 00:15:18.201 "nvme_iov_md": false 00:15:18.201 }, 00:15:18.201 "memory_domains": [ 00:15:18.201 { 00:15:18.201 "dma_device_id": "system", 00:15:18.201 "dma_device_type": 1 00:15:18.201 }, 00:15:18.201 { 00:15:18.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.201 "dma_device_type": 2 00:15:18.201 } 00:15:18.201 ], 00:15:18.201 "driver_specific": {} 00:15:18.201 }' 00:15:18.201 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:18.201 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:18.460 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:18.460 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:18.460 10:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:18.460 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:18.460 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:18.460 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:15:18.460 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:18.460 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:18.460 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:18.719 [2024-07-25 10:30:22.398232] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.719 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.977 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.977 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.977 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.977 "name": "Existed_Raid", 00:15:18.977 "uuid": "f07470db-425f-4e3c-b2d9-938390b301ed", 00:15:18.977 "strip_size_kb": 0, 00:15:18.977 "state": "online", 00:15:18.977 "raid_level": "raid1", 00:15:18.977 "superblock": true, 00:15:18.977 "num_base_bdevs": 3, 00:15:18.977 "num_base_bdevs_discovered": 2, 00:15:18.977 "num_base_bdevs_operational": 2, 00:15:18.977 "base_bdevs_list": [ 00:15:18.977 { 00:15:18.977 "name": null, 00:15:18.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.977 "is_configured": false, 00:15:18.977 "data_offset": 2048, 00:15:18.977 "data_size": 63488 00:15:18.977 }, 00:15:18.977 { 00:15:18.977 "name": "BaseBdev2", 00:15:18.977 "uuid": "124a37fc-a482-49e4-8d4a-218549e7889f", 00:15:18.977 "is_configured": true, 00:15:18.977 "data_offset": 2048, 00:15:18.977 "data_size": 63488 00:15:18.977 }, 00:15:18.977 { 00:15:18.977 "name": "BaseBdev3", 00:15:18.977 "uuid": "db7b7114-aca5-4090-b8b2-55a1e5ab8fca", 00:15:18.977 "is_configured": true, 00:15:18.977 "data_offset": 2048, 00:15:18.977 "data_size": 63488 00:15:18.977 } 
00:15:18.977 ] 00:15:18.977 }' 00:15:18.977 10:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.977 10:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.543 10:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:19.543 10:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:19.543 10:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.543 10:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:19.802 10:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:19.802 10:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:19.802 10:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:20.059 [2024-07-25 10:30:23.703949] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:20.059 10:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:20.059 10:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:20.059 10:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.059 10:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:20.317 10:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:20.317 10:30:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:20.317 10:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:20.574 [2024-07-25 10:30:24.196815] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:20.574 [2024-07-25 10:30:24.196929] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.574 [2024-07-25 10:30:24.210496] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.574 [2024-07-25 10:30:24.210566] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.574 [2024-07-25 10:30:24.210578] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x208cd90 name Existed_Raid, state offline 00:15:20.574 10:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:20.574 10:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:20.574 10:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.574 10:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:20.832 10:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:20.832 10:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:20.832 10:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:15:20.832 10:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:20.832 10:30:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:20.832 10:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:21.090 BaseBdev2 00:15:21.090 10:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:21.090 10:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:21.090 10:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:21.090 10:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:21.090 10:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:21.090 10:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:21.090 10:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:21.348 10:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:21.606 [ 00:15:21.606 { 00:15:21.606 "name": "BaseBdev2", 00:15:21.606 "aliases": [ 00:15:21.606 "593e1a83-39a1-4fa7-b7be-2de88403b7b9" 00:15:21.606 ], 00:15:21.606 "product_name": "Malloc disk", 00:15:21.606 "block_size": 512, 00:15:21.606 "num_blocks": 65536, 00:15:21.606 "uuid": "593e1a83-39a1-4fa7-b7be-2de88403b7b9", 00:15:21.606 "assigned_rate_limits": { 00:15:21.606 "rw_ios_per_sec": 0, 00:15:21.606 "rw_mbytes_per_sec": 0, 00:15:21.606 "r_mbytes_per_sec": 0, 00:15:21.606 "w_mbytes_per_sec": 0 00:15:21.606 }, 00:15:21.606 "claimed": false, 00:15:21.606 "zoned": false, 
00:15:21.606 "supported_io_types": { 00:15:21.606 "read": true, 00:15:21.606 "write": true, 00:15:21.606 "unmap": true, 00:15:21.606 "flush": true, 00:15:21.606 "reset": true, 00:15:21.606 "nvme_admin": false, 00:15:21.606 "nvme_io": false, 00:15:21.606 "nvme_io_md": false, 00:15:21.606 "write_zeroes": true, 00:15:21.606 "zcopy": true, 00:15:21.606 "get_zone_info": false, 00:15:21.606 "zone_management": false, 00:15:21.606 "zone_append": false, 00:15:21.606 "compare": false, 00:15:21.606 "compare_and_write": false, 00:15:21.606 "abort": true, 00:15:21.606 "seek_hole": false, 00:15:21.606 "seek_data": false, 00:15:21.606 "copy": true, 00:15:21.606 "nvme_iov_md": false 00:15:21.606 }, 00:15:21.606 "memory_domains": [ 00:15:21.606 { 00:15:21.606 "dma_device_id": "system", 00:15:21.606 "dma_device_type": 1 00:15:21.606 }, 00:15:21.606 { 00:15:21.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.606 "dma_device_type": 2 00:15:21.606 } 00:15:21.606 ], 00:15:21.606 "driver_specific": {} 00:15:21.606 } 00:15:21.606 ] 00:15:21.606 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:21.606 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:21.606 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:21.606 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:21.864 BaseBdev3 00:15:21.864 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:21.864 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:21.864 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:21.864 10:30:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:21.864 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:21.864 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:21.864 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:22.122 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:22.379 [ 00:15:22.380 { 00:15:22.380 "name": "BaseBdev3", 00:15:22.380 "aliases": [ 00:15:22.380 "1472a18b-ca17-4605-ae4a-4e1424a702a3" 00:15:22.380 ], 00:15:22.380 "product_name": "Malloc disk", 00:15:22.380 "block_size": 512, 00:15:22.380 "num_blocks": 65536, 00:15:22.380 "uuid": "1472a18b-ca17-4605-ae4a-4e1424a702a3", 00:15:22.380 "assigned_rate_limits": { 00:15:22.380 "rw_ios_per_sec": 0, 00:15:22.380 "rw_mbytes_per_sec": 0, 00:15:22.380 "r_mbytes_per_sec": 0, 00:15:22.380 "w_mbytes_per_sec": 0 00:15:22.380 }, 00:15:22.380 "claimed": false, 00:15:22.380 "zoned": false, 00:15:22.380 "supported_io_types": { 00:15:22.380 "read": true, 00:15:22.380 "write": true, 00:15:22.380 "unmap": true, 00:15:22.380 "flush": true, 00:15:22.380 "reset": true, 00:15:22.380 "nvme_admin": false, 00:15:22.380 "nvme_io": false, 00:15:22.380 "nvme_io_md": false, 00:15:22.380 "write_zeroes": true, 00:15:22.380 "zcopy": true, 00:15:22.380 "get_zone_info": false, 00:15:22.380 "zone_management": false, 00:15:22.380 "zone_append": false, 00:15:22.380 "compare": false, 00:15:22.380 "compare_and_write": false, 00:15:22.380 "abort": true, 00:15:22.380 "seek_hole": false, 00:15:22.380 "seek_data": false, 00:15:22.380 "copy": true, 00:15:22.380 "nvme_iov_md": 
false 00:15:22.380 }, 00:15:22.380 "memory_domains": [ 00:15:22.380 { 00:15:22.380 "dma_device_id": "system", 00:15:22.380 "dma_device_type": 1 00:15:22.380 }, 00:15:22.380 { 00:15:22.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.380 "dma_device_type": 2 00:15:22.380 } 00:15:22.380 ], 00:15:22.380 "driver_specific": {} 00:15:22.380 } 00:15:22.380 ] 00:15:22.380 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:22.380 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:22.380 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:22.380 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:22.637 [2024-07-25 10:30:26.145599] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.637 [2024-07-25 10:30:26.145638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.637 [2024-07-25 10:30:26.145662] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.637 [2024-07-25 10:30:26.146969] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:22.637 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:22.637 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.637 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:22.637 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:22.638 10:30:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:22.638 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:22.638 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.638 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.638 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.638 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.638 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.638 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.895 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.895 "name": "Existed_Raid", 00:15:22.895 "uuid": "209906ec-c4c7-489b-a6d8-90d64e00a84f", 00:15:22.895 "strip_size_kb": 0, 00:15:22.895 "state": "configuring", 00:15:22.895 "raid_level": "raid1", 00:15:22.895 "superblock": true, 00:15:22.895 "num_base_bdevs": 3, 00:15:22.895 "num_base_bdevs_discovered": 2, 00:15:22.895 "num_base_bdevs_operational": 3, 00:15:22.895 "base_bdevs_list": [ 00:15:22.895 { 00:15:22.895 "name": "BaseBdev1", 00:15:22.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.895 "is_configured": false, 00:15:22.895 "data_offset": 0, 00:15:22.895 "data_size": 0 00:15:22.895 }, 00:15:22.895 { 00:15:22.895 "name": "BaseBdev2", 00:15:22.895 "uuid": "593e1a83-39a1-4fa7-b7be-2de88403b7b9", 00:15:22.895 "is_configured": true, 00:15:22.895 "data_offset": 2048, 00:15:22.895 "data_size": 63488 00:15:22.895 }, 00:15:22.895 { 00:15:22.895 "name": "BaseBdev3", 
00:15:22.895 "uuid": "1472a18b-ca17-4605-ae4a-4e1424a702a3", 00:15:22.895 "is_configured": true, 00:15:22.895 "data_offset": 2048, 00:15:22.895 "data_size": 63488 00:15:22.895 } 00:15:22.895 ] 00:15:22.895 }' 00:15:22.895 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.895 10:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.460 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:23.717 [2024-07-25 10:30:27.176384] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:23.717 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:23.717 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:23.717 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:23.717 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:23.717 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:23.717 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:23.717 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.717 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.717 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.717 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.717 10:30:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.717 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.975 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.975 "name": "Existed_Raid", 00:15:23.975 "uuid": "209906ec-c4c7-489b-a6d8-90d64e00a84f", 00:15:23.975 "strip_size_kb": 0, 00:15:23.975 "state": "configuring", 00:15:23.975 "raid_level": "raid1", 00:15:23.975 "superblock": true, 00:15:23.975 "num_base_bdevs": 3, 00:15:23.975 "num_base_bdevs_discovered": 1, 00:15:23.975 "num_base_bdevs_operational": 3, 00:15:23.975 "base_bdevs_list": [ 00:15:23.975 { 00:15:23.975 "name": "BaseBdev1", 00:15:23.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.975 "is_configured": false, 00:15:23.975 "data_offset": 0, 00:15:23.975 "data_size": 0 00:15:23.975 }, 00:15:23.975 { 00:15:23.975 "name": null, 00:15:23.975 "uuid": "593e1a83-39a1-4fa7-b7be-2de88403b7b9", 00:15:23.975 "is_configured": false, 00:15:23.975 "data_offset": 2048, 00:15:23.975 "data_size": 63488 00:15:23.975 }, 00:15:23.975 { 00:15:23.975 "name": "BaseBdev3", 00:15:23.975 "uuid": "1472a18b-ca17-4605-ae4a-4e1424a702a3", 00:15:23.975 "is_configured": true, 00:15:23.975 "data_offset": 2048, 00:15:23.975 "data_size": 63488 00:15:23.975 } 00:15:23.975 ] 00:15:23.975 }' 00:15:23.975 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.975 10:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.539 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.539 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:15:24.539 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:24.539 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:24.797 [2024-07-25 10:30:28.466434] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.797 BaseBdev1 00:15:24.797 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:24.797 10:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:24.797 10:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:24.797 10:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:24.797 10:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:24.797 10:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:24.797 10:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:25.054 10:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.311 [ 00:15:25.311 { 00:15:25.311 "name": "BaseBdev1", 00:15:25.311 "aliases": [ 00:15:25.311 "9f226365-18fb-4558-b077-e97be668cd27" 00:15:25.311 ], 00:15:25.311 "product_name": "Malloc disk", 00:15:25.311 "block_size": 512, 00:15:25.311 "num_blocks": 65536, 00:15:25.311 "uuid": "9f226365-18fb-4558-b077-e97be668cd27", 00:15:25.311 
"assigned_rate_limits": { 00:15:25.311 "rw_ios_per_sec": 0, 00:15:25.311 "rw_mbytes_per_sec": 0, 00:15:25.311 "r_mbytes_per_sec": 0, 00:15:25.311 "w_mbytes_per_sec": 0 00:15:25.311 }, 00:15:25.311 "claimed": true, 00:15:25.311 "claim_type": "exclusive_write", 00:15:25.311 "zoned": false, 00:15:25.311 "supported_io_types": { 00:15:25.311 "read": true, 00:15:25.311 "write": true, 00:15:25.311 "unmap": true, 00:15:25.311 "flush": true, 00:15:25.311 "reset": true, 00:15:25.311 "nvme_admin": false, 00:15:25.311 "nvme_io": false, 00:15:25.312 "nvme_io_md": false, 00:15:25.312 "write_zeroes": true, 00:15:25.312 "zcopy": true, 00:15:25.312 "get_zone_info": false, 00:15:25.312 "zone_management": false, 00:15:25.312 "zone_append": false, 00:15:25.312 "compare": false, 00:15:25.312 "compare_and_write": false, 00:15:25.312 "abort": true, 00:15:25.312 "seek_hole": false, 00:15:25.312 "seek_data": false, 00:15:25.312 "copy": true, 00:15:25.312 "nvme_iov_md": false 00:15:25.312 }, 00:15:25.312 "memory_domains": [ 00:15:25.312 { 00:15:25.312 "dma_device_id": "system", 00:15:25.312 "dma_device_type": 1 00:15:25.312 }, 00:15:25.312 { 00:15:25.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.312 "dma_device_type": 2 00:15:25.312 } 00:15:25.312 ], 00:15:25.312 "driver_specific": {} 00:15:25.312 } 00:15:25.312 ] 00:15:25.312 10:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:25.312 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:25.312 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:25.312 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:25.312 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:25.312 10:30:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:25.312 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:25.312 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.312 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.312 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.312 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.312 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.312 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.569 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.569 "name": "Existed_Raid", 00:15:25.569 "uuid": "209906ec-c4c7-489b-a6d8-90d64e00a84f", 00:15:25.569 "strip_size_kb": 0, 00:15:25.569 "state": "configuring", 00:15:25.569 "raid_level": "raid1", 00:15:25.569 "superblock": true, 00:15:25.569 "num_base_bdevs": 3, 00:15:25.569 "num_base_bdevs_discovered": 2, 00:15:25.569 "num_base_bdevs_operational": 3, 00:15:25.569 "base_bdevs_list": [ 00:15:25.569 { 00:15:25.569 "name": "BaseBdev1", 00:15:25.569 "uuid": "9f226365-18fb-4558-b077-e97be668cd27", 00:15:25.569 "is_configured": true, 00:15:25.569 "data_offset": 2048, 00:15:25.569 "data_size": 63488 00:15:25.569 }, 00:15:25.569 { 00:15:25.569 "name": null, 00:15:25.569 "uuid": "593e1a83-39a1-4fa7-b7be-2de88403b7b9", 00:15:25.569 "is_configured": false, 00:15:25.569 "data_offset": 2048, 00:15:25.569 "data_size": 63488 00:15:25.569 }, 00:15:25.569 { 00:15:25.569 "name": "BaseBdev3", 00:15:25.569 "uuid": 
"1472a18b-ca17-4605-ae4a-4e1424a702a3", 00:15:25.569 "is_configured": true, 00:15:25.569 "data_offset": 2048, 00:15:25.569 "data_size": 63488 00:15:25.569 } 00:15:25.569 ] 00:15:25.569 }' 00:15:25.569 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.569 10:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.133 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.133 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:26.391 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:26.391 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:26.649 [2024-07-25 10:30:30.251245] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:26.649 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:26.649 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:26.649 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:26.649 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:26.649 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:26.649 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:26.649 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 
-- # local raid_bdev_info 00:15:26.649 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.649 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.649 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.649 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.649 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.907 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.907 "name": "Existed_Raid", 00:15:26.907 "uuid": "209906ec-c4c7-489b-a6d8-90d64e00a84f", 00:15:26.907 "strip_size_kb": 0, 00:15:26.907 "state": "configuring", 00:15:26.907 "raid_level": "raid1", 00:15:26.907 "superblock": true, 00:15:26.907 "num_base_bdevs": 3, 00:15:26.907 "num_base_bdevs_discovered": 1, 00:15:26.907 "num_base_bdevs_operational": 3, 00:15:26.907 "base_bdevs_list": [ 00:15:26.907 { 00:15:26.907 "name": "BaseBdev1", 00:15:26.907 "uuid": "9f226365-18fb-4558-b077-e97be668cd27", 00:15:26.907 "is_configured": true, 00:15:26.907 "data_offset": 2048, 00:15:26.907 "data_size": 63488 00:15:26.907 }, 00:15:26.907 { 00:15:26.907 "name": null, 00:15:26.907 "uuid": "593e1a83-39a1-4fa7-b7be-2de88403b7b9", 00:15:26.907 "is_configured": false, 00:15:26.907 "data_offset": 2048, 00:15:26.907 "data_size": 63488 00:15:26.907 }, 00:15:26.907 { 00:15:26.907 "name": null, 00:15:26.907 "uuid": "1472a18b-ca17-4605-ae4a-4e1424a702a3", 00:15:26.907 "is_configured": false, 00:15:26.907 "data_offset": 2048, 00:15:26.907 "data_size": 63488 00:15:26.907 } 00:15:26.907 ] 00:15:26.907 }' 00:15:26.907 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:15:26.907 10:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.473 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.473 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:27.730 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:27.731 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:27.988 [2024-07-25 10:30:31.554750] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.989 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:27.989 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:27.989 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:27.989 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:27.989 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:27.989 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:27.989 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.989 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.989 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:15:27.989 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.989 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.989 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.273 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.273 "name": "Existed_Raid", 00:15:28.273 "uuid": "209906ec-c4c7-489b-a6d8-90d64e00a84f", 00:15:28.273 "strip_size_kb": 0, 00:15:28.273 "state": "configuring", 00:15:28.273 "raid_level": "raid1", 00:15:28.273 "superblock": true, 00:15:28.273 "num_base_bdevs": 3, 00:15:28.273 "num_base_bdevs_discovered": 2, 00:15:28.273 "num_base_bdevs_operational": 3, 00:15:28.273 "base_bdevs_list": [ 00:15:28.273 { 00:15:28.273 "name": "BaseBdev1", 00:15:28.273 "uuid": "9f226365-18fb-4558-b077-e97be668cd27", 00:15:28.273 "is_configured": true, 00:15:28.273 "data_offset": 2048, 00:15:28.273 "data_size": 63488 00:15:28.273 }, 00:15:28.273 { 00:15:28.273 "name": null, 00:15:28.273 "uuid": "593e1a83-39a1-4fa7-b7be-2de88403b7b9", 00:15:28.273 "is_configured": false, 00:15:28.273 "data_offset": 2048, 00:15:28.273 "data_size": 63488 00:15:28.273 }, 00:15:28.273 { 00:15:28.273 "name": "BaseBdev3", 00:15:28.273 "uuid": "1472a18b-ca17-4605-ae4a-4e1424a702a3", 00:15:28.273 "is_configured": true, 00:15:28.273 "data_offset": 2048, 00:15:28.273 "data_size": 63488 00:15:28.273 } 00:15:28.273 ] 00:15:28.273 }' 00:15:28.273 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.273 10:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.838 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.838 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:29.095 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:29.095 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:29.353 [2024-07-25 10:30:32.810069] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.353 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:29.353 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:29.353 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:29.353 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:29.353 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:29.353 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:29.353 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.353 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.353 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.353 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.353 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.353 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.611 10:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.611 "name": "Existed_Raid", 00:15:29.611 "uuid": "209906ec-c4c7-489b-a6d8-90d64e00a84f", 00:15:29.611 "strip_size_kb": 0, 00:15:29.611 "state": "configuring", 00:15:29.611 "raid_level": "raid1", 00:15:29.611 "superblock": true, 00:15:29.611 "num_base_bdevs": 3, 00:15:29.611 "num_base_bdevs_discovered": 1, 00:15:29.611 "num_base_bdevs_operational": 3, 00:15:29.611 "base_bdevs_list": [ 00:15:29.611 { 00:15:29.611 "name": null, 00:15:29.611 "uuid": "9f226365-18fb-4558-b077-e97be668cd27", 00:15:29.611 "is_configured": false, 00:15:29.611 "data_offset": 2048, 00:15:29.611 "data_size": 63488 00:15:29.611 }, 00:15:29.611 { 00:15:29.611 "name": null, 00:15:29.611 "uuid": "593e1a83-39a1-4fa7-b7be-2de88403b7b9", 00:15:29.611 "is_configured": false, 00:15:29.611 "data_offset": 2048, 00:15:29.611 "data_size": 63488 00:15:29.611 }, 00:15:29.611 { 00:15:29.611 "name": "BaseBdev3", 00:15:29.611 "uuid": "1472a18b-ca17-4605-ae4a-4e1424a702a3", 00:15:29.611 "is_configured": true, 00:15:29.611 "data_offset": 2048, 00:15:29.611 "data_size": 63488 00:15:29.611 } 00:15:29.611 ] 00:15:29.611 }' 00:15:29.611 10:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.611 10:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.176 10:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.176 10:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:15:30.176 10:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:30.176 10:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:30.434 [2024-07-25 10:30:34.089280] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.434 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:30.434 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:30.434 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:30.434 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:30.434 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:30.434 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:30.434 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:30.434 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:30.434 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:30.434 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:30.434 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.434 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:15:30.692 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.692 "name": "Existed_Raid", 00:15:30.692 "uuid": "209906ec-c4c7-489b-a6d8-90d64e00a84f", 00:15:30.692 "strip_size_kb": 0, 00:15:30.692 "state": "configuring", 00:15:30.692 "raid_level": "raid1", 00:15:30.692 "superblock": true, 00:15:30.692 "num_base_bdevs": 3, 00:15:30.692 "num_base_bdevs_discovered": 2, 00:15:30.692 "num_base_bdevs_operational": 3, 00:15:30.692 "base_bdevs_list": [ 00:15:30.692 { 00:15:30.692 "name": null, 00:15:30.692 "uuid": "9f226365-18fb-4558-b077-e97be668cd27", 00:15:30.692 "is_configured": false, 00:15:30.692 "data_offset": 2048, 00:15:30.692 "data_size": 63488 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "name": "BaseBdev2", 00:15:30.692 "uuid": "593e1a83-39a1-4fa7-b7be-2de88403b7b9", 00:15:30.692 "is_configured": true, 00:15:30.692 "data_offset": 2048, 00:15:30.692 "data_size": 63488 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "name": "BaseBdev3", 00:15:30.692 "uuid": "1472a18b-ca17-4605-ae4a-4e1424a702a3", 00:15:30.692 "is_configured": true, 00:15:30.692 "data_offset": 2048, 00:15:30.692 "data_size": 63488 00:15:30.692 } 00:15:30.692 ] 00:15:30.692 }' 00:15:30.692 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.692 10:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.258 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.258 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:31.516 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:31.516 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.516 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:31.773 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9f226365-18fb-4558-b077-e97be668cd27 00:15:32.031 [2024-07-25 10:30:35.638945] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:32.031 [2024-07-25 10:30:35.639192] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x208b220 00:15:32.031 [2024-07-25 10:30:35.639208] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:32.031 [2024-07-25 10:30:35.639357] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x208cd20 00:15:32.031 [2024-07-25 10:30:35.639492] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x208b220 00:15:32.031 [2024-07-25 10:30:35.639513] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x208b220 00:15:32.031 [2024-07-25 10:30:35.639598] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.031 NewBaseBdev 00:15:32.031 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:32.031 10:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:32.031 10:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:32.031 10:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:32.031 10:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:32.031 
10:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:32.031 10:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:32.289 10:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:32.546 [ 00:15:32.546 { 00:15:32.546 "name": "NewBaseBdev", 00:15:32.546 "aliases": [ 00:15:32.546 "9f226365-18fb-4558-b077-e97be668cd27" 00:15:32.546 ], 00:15:32.546 "product_name": "Malloc disk", 00:15:32.546 "block_size": 512, 00:15:32.546 "num_blocks": 65536, 00:15:32.546 "uuid": "9f226365-18fb-4558-b077-e97be668cd27", 00:15:32.546 "assigned_rate_limits": { 00:15:32.546 "rw_ios_per_sec": 0, 00:15:32.546 "rw_mbytes_per_sec": 0, 00:15:32.546 "r_mbytes_per_sec": 0, 00:15:32.546 "w_mbytes_per_sec": 0 00:15:32.546 }, 00:15:32.546 "claimed": true, 00:15:32.546 "claim_type": "exclusive_write", 00:15:32.546 "zoned": false, 00:15:32.546 "supported_io_types": { 00:15:32.546 "read": true, 00:15:32.546 "write": true, 00:15:32.546 "unmap": true, 00:15:32.546 "flush": true, 00:15:32.546 "reset": true, 00:15:32.546 "nvme_admin": false, 00:15:32.546 "nvme_io": false, 00:15:32.546 "nvme_io_md": false, 00:15:32.546 "write_zeroes": true, 00:15:32.546 "zcopy": true, 00:15:32.546 "get_zone_info": false, 00:15:32.546 "zone_management": false, 00:15:32.546 "zone_append": false, 00:15:32.546 "compare": false, 00:15:32.546 "compare_and_write": false, 00:15:32.546 "abort": true, 00:15:32.546 "seek_hole": false, 00:15:32.546 "seek_data": false, 00:15:32.546 "copy": true, 00:15:32.546 "nvme_iov_md": false 00:15:32.546 }, 00:15:32.546 "memory_domains": [ 00:15:32.546 { 00:15:32.546 "dma_device_id": "system", 00:15:32.546 "dma_device_type": 1 00:15:32.546 
}, 00:15:32.546 { 00:15:32.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.546 "dma_device_type": 2 00:15:32.546 } 00:15:32.546 ], 00:15:32.546 "driver_specific": {} 00:15:32.546 } 00:15:32.546 ] 00:15:32.546 10:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:32.546 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:32.546 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:32.546 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:32.546 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:32.546 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:32.546 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:32.546 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:32.546 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:32.546 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:32.546 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:32.546 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.546 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.805 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.805 "name": "Existed_Raid", 00:15:32.805 "uuid": 
"209906ec-c4c7-489b-a6d8-90d64e00a84f", 00:15:32.805 "strip_size_kb": 0, 00:15:32.805 "state": "online", 00:15:32.805 "raid_level": "raid1", 00:15:32.805 "superblock": true, 00:15:32.805 "num_base_bdevs": 3, 00:15:32.805 "num_base_bdevs_discovered": 3, 00:15:32.805 "num_base_bdevs_operational": 3, 00:15:32.805 "base_bdevs_list": [ 00:15:32.805 { 00:15:32.805 "name": "NewBaseBdev", 00:15:32.805 "uuid": "9f226365-18fb-4558-b077-e97be668cd27", 00:15:32.805 "is_configured": true, 00:15:32.805 "data_offset": 2048, 00:15:32.805 "data_size": 63488 00:15:32.805 }, 00:15:32.805 { 00:15:32.805 "name": "BaseBdev2", 00:15:32.805 "uuid": "593e1a83-39a1-4fa7-b7be-2de88403b7b9", 00:15:32.805 "is_configured": true, 00:15:32.805 "data_offset": 2048, 00:15:32.805 "data_size": 63488 00:15:32.805 }, 00:15:32.805 { 00:15:32.805 "name": "BaseBdev3", 00:15:32.805 "uuid": "1472a18b-ca17-4605-ae4a-4e1424a702a3", 00:15:32.805 "is_configured": true, 00:15:32.805 "data_offset": 2048, 00:15:32.805 "data_size": 63488 00:15:32.805 } 00:15:32.805 ] 00:15:32.805 }' 00:15:32.805 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.805 10:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.369 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:33.369 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:33.369 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:33.369 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:33.369 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:33.369 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:33.369 10:30:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:33.369 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:33.626 [2024-07-25 10:30:37.115123] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.626 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:33.626 "name": "Existed_Raid", 00:15:33.626 "aliases": [ 00:15:33.626 "209906ec-c4c7-489b-a6d8-90d64e00a84f" 00:15:33.626 ], 00:15:33.626 "product_name": "Raid Volume", 00:15:33.626 "block_size": 512, 00:15:33.626 "num_blocks": 63488, 00:15:33.626 "uuid": "209906ec-c4c7-489b-a6d8-90d64e00a84f", 00:15:33.626 "assigned_rate_limits": { 00:15:33.626 "rw_ios_per_sec": 0, 00:15:33.626 "rw_mbytes_per_sec": 0, 00:15:33.626 "r_mbytes_per_sec": 0, 00:15:33.626 "w_mbytes_per_sec": 0 00:15:33.626 }, 00:15:33.626 "claimed": false, 00:15:33.626 "zoned": false, 00:15:33.626 "supported_io_types": { 00:15:33.626 "read": true, 00:15:33.626 "write": true, 00:15:33.626 "unmap": false, 00:15:33.626 "flush": false, 00:15:33.626 "reset": true, 00:15:33.626 "nvme_admin": false, 00:15:33.626 "nvme_io": false, 00:15:33.626 "nvme_io_md": false, 00:15:33.626 "write_zeroes": true, 00:15:33.626 "zcopy": false, 00:15:33.626 "get_zone_info": false, 00:15:33.626 "zone_management": false, 00:15:33.626 "zone_append": false, 00:15:33.626 "compare": false, 00:15:33.626 "compare_and_write": false, 00:15:33.626 "abort": false, 00:15:33.626 "seek_hole": false, 00:15:33.626 "seek_data": false, 00:15:33.626 "copy": false, 00:15:33.626 "nvme_iov_md": false 00:15:33.626 }, 00:15:33.626 "memory_domains": [ 00:15:33.626 { 00:15:33.626 "dma_device_id": "system", 00:15:33.626 "dma_device_type": 1 00:15:33.626 }, 00:15:33.626 { 00:15:33.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.626 
"dma_device_type": 2 00:15:33.626 }, 00:15:33.626 { 00:15:33.626 "dma_device_id": "system", 00:15:33.626 "dma_device_type": 1 00:15:33.626 }, 00:15:33.626 { 00:15:33.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.626 "dma_device_type": 2 00:15:33.626 }, 00:15:33.626 { 00:15:33.626 "dma_device_id": "system", 00:15:33.626 "dma_device_type": 1 00:15:33.626 }, 00:15:33.626 { 00:15:33.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.626 "dma_device_type": 2 00:15:33.626 } 00:15:33.626 ], 00:15:33.626 "driver_specific": { 00:15:33.626 "raid": { 00:15:33.626 "uuid": "209906ec-c4c7-489b-a6d8-90d64e00a84f", 00:15:33.626 "strip_size_kb": 0, 00:15:33.626 "state": "online", 00:15:33.626 "raid_level": "raid1", 00:15:33.626 "superblock": true, 00:15:33.626 "num_base_bdevs": 3, 00:15:33.626 "num_base_bdevs_discovered": 3, 00:15:33.626 "num_base_bdevs_operational": 3, 00:15:33.626 "base_bdevs_list": [ 00:15:33.626 { 00:15:33.626 "name": "NewBaseBdev", 00:15:33.626 "uuid": "9f226365-18fb-4558-b077-e97be668cd27", 00:15:33.626 "is_configured": true, 00:15:33.626 "data_offset": 2048, 00:15:33.626 "data_size": 63488 00:15:33.626 }, 00:15:33.626 { 00:15:33.626 "name": "BaseBdev2", 00:15:33.626 "uuid": "593e1a83-39a1-4fa7-b7be-2de88403b7b9", 00:15:33.626 "is_configured": true, 00:15:33.626 "data_offset": 2048, 00:15:33.626 "data_size": 63488 00:15:33.626 }, 00:15:33.626 { 00:15:33.626 "name": "BaseBdev3", 00:15:33.626 "uuid": "1472a18b-ca17-4605-ae4a-4e1424a702a3", 00:15:33.626 "is_configured": true, 00:15:33.626 "data_offset": 2048, 00:15:33.626 "data_size": 63488 00:15:33.626 } 00:15:33.626 ] 00:15:33.627 } 00:15:33.627 } 00:15:33.627 }' 00:15:33.627 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:33.627 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:33.627 BaseBdev2 00:15:33.627 
BaseBdev3' 00:15:33.627 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:33.627 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:33.627 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:33.884 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:33.884 "name": "NewBaseBdev", 00:15:33.884 "aliases": [ 00:15:33.884 "9f226365-18fb-4558-b077-e97be668cd27" 00:15:33.884 ], 00:15:33.884 "product_name": "Malloc disk", 00:15:33.884 "block_size": 512, 00:15:33.884 "num_blocks": 65536, 00:15:33.884 "uuid": "9f226365-18fb-4558-b077-e97be668cd27", 00:15:33.884 "assigned_rate_limits": { 00:15:33.884 "rw_ios_per_sec": 0, 00:15:33.884 "rw_mbytes_per_sec": 0, 00:15:33.884 "r_mbytes_per_sec": 0, 00:15:33.884 "w_mbytes_per_sec": 0 00:15:33.884 }, 00:15:33.884 "claimed": true, 00:15:33.884 "claim_type": "exclusive_write", 00:15:33.884 "zoned": false, 00:15:33.884 "supported_io_types": { 00:15:33.884 "read": true, 00:15:33.884 "write": true, 00:15:33.884 "unmap": true, 00:15:33.884 "flush": true, 00:15:33.884 "reset": true, 00:15:33.884 "nvme_admin": false, 00:15:33.884 "nvme_io": false, 00:15:33.884 "nvme_io_md": false, 00:15:33.884 "write_zeroes": true, 00:15:33.884 "zcopy": true, 00:15:33.884 "get_zone_info": false, 00:15:33.884 "zone_management": false, 00:15:33.884 "zone_append": false, 00:15:33.884 "compare": false, 00:15:33.884 "compare_and_write": false, 00:15:33.884 "abort": true, 00:15:33.884 "seek_hole": false, 00:15:33.884 "seek_data": false, 00:15:33.884 "copy": true, 00:15:33.884 "nvme_iov_md": false 00:15:33.884 }, 00:15:33.884 "memory_domains": [ 00:15:33.884 { 00:15:33.884 "dma_device_id": "system", 00:15:33.884 "dma_device_type": 1 00:15:33.884 }, 00:15:33.884 { 
00:15:33.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.884 "dma_device_type": 2 00:15:33.884 } 00:15:33.884 ], 00:15:33.884 "driver_specific": {} 00:15:33.884 }' 00:15:33.884 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.884 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.884 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:33.884 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.884 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.884 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:33.884 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.884 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.141 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:34.141 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.142 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.142 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:34.142 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.142 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:34.142 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:34.399 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:34.399 "name": 
"BaseBdev2", 00:15:34.399 "aliases": [ 00:15:34.399 "593e1a83-39a1-4fa7-b7be-2de88403b7b9" 00:15:34.399 ], 00:15:34.399 "product_name": "Malloc disk", 00:15:34.399 "block_size": 512, 00:15:34.399 "num_blocks": 65536, 00:15:34.399 "uuid": "593e1a83-39a1-4fa7-b7be-2de88403b7b9", 00:15:34.399 "assigned_rate_limits": { 00:15:34.399 "rw_ios_per_sec": 0, 00:15:34.399 "rw_mbytes_per_sec": 0, 00:15:34.399 "r_mbytes_per_sec": 0, 00:15:34.399 "w_mbytes_per_sec": 0 00:15:34.399 }, 00:15:34.399 "claimed": true, 00:15:34.399 "claim_type": "exclusive_write", 00:15:34.399 "zoned": false, 00:15:34.399 "supported_io_types": { 00:15:34.399 "read": true, 00:15:34.399 "write": true, 00:15:34.399 "unmap": true, 00:15:34.399 "flush": true, 00:15:34.399 "reset": true, 00:15:34.399 "nvme_admin": false, 00:15:34.399 "nvme_io": false, 00:15:34.399 "nvme_io_md": false, 00:15:34.399 "write_zeroes": true, 00:15:34.399 "zcopy": true, 00:15:34.399 "get_zone_info": false, 00:15:34.399 "zone_management": false, 00:15:34.399 "zone_append": false, 00:15:34.399 "compare": false, 00:15:34.399 "compare_and_write": false, 00:15:34.399 "abort": true, 00:15:34.399 "seek_hole": false, 00:15:34.399 "seek_data": false, 00:15:34.399 "copy": true, 00:15:34.399 "nvme_iov_md": false 00:15:34.399 }, 00:15:34.399 "memory_domains": [ 00:15:34.399 { 00:15:34.399 "dma_device_id": "system", 00:15:34.399 "dma_device_type": 1 00:15:34.399 }, 00:15:34.399 { 00:15:34.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.399 "dma_device_type": 2 00:15:34.399 } 00:15:34.399 ], 00:15:34.399 "driver_specific": {} 00:15:34.399 }' 00:15:34.399 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.399 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.399 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:34.399 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:15:34.399 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.399 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:34.399 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.399 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.656 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:34.656 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.656 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.657 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:34.657 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.657 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:34.657 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:34.914 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:34.914 "name": "BaseBdev3", 00:15:34.914 "aliases": [ 00:15:34.914 "1472a18b-ca17-4605-ae4a-4e1424a702a3" 00:15:34.914 ], 00:15:34.914 "product_name": "Malloc disk", 00:15:34.914 "block_size": 512, 00:15:34.914 "num_blocks": 65536, 00:15:34.914 "uuid": "1472a18b-ca17-4605-ae4a-4e1424a702a3", 00:15:34.914 "assigned_rate_limits": { 00:15:34.914 "rw_ios_per_sec": 0, 00:15:34.914 "rw_mbytes_per_sec": 0, 00:15:34.914 "r_mbytes_per_sec": 0, 00:15:34.914 "w_mbytes_per_sec": 0 00:15:34.914 }, 00:15:34.914 "claimed": true, 00:15:34.914 "claim_type": "exclusive_write", 00:15:34.914 "zoned": 
false, 00:15:34.914 "supported_io_types": { 00:15:34.914 "read": true, 00:15:34.914 "write": true, 00:15:34.914 "unmap": true, 00:15:34.914 "flush": true, 00:15:34.914 "reset": true, 00:15:34.914 "nvme_admin": false, 00:15:34.914 "nvme_io": false, 00:15:34.914 "nvme_io_md": false, 00:15:34.914 "write_zeroes": true, 00:15:34.914 "zcopy": true, 00:15:34.914 "get_zone_info": false, 00:15:34.914 "zone_management": false, 00:15:34.914 "zone_append": false, 00:15:34.914 "compare": false, 00:15:34.914 "compare_and_write": false, 00:15:34.914 "abort": true, 00:15:34.914 "seek_hole": false, 00:15:34.914 "seek_data": false, 00:15:34.914 "copy": true, 00:15:34.914 "nvme_iov_md": false 00:15:34.914 }, 00:15:34.914 "memory_domains": [ 00:15:34.914 { 00:15:34.914 "dma_device_id": "system", 00:15:34.914 "dma_device_type": 1 00:15:34.914 }, 00:15:34.914 { 00:15:34.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.914 "dma_device_type": 2 00:15:34.914 } 00:15:34.914 ], 00:15:34.914 "driver_specific": {} 00:15:34.914 }' 00:15:34.914 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.914 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.914 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:34.914 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.914 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.914 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:34.914 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.914 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.171 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:35.171 10:30:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.171 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.171 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:35.171 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:35.429 [2024-07-25 10:30:38.927624] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.429 [2024-07-25 10:30:38.927646] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.429 [2024-07-25 10:30:38.927705] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.429 [2024-07-25 10:30:38.927921] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.429 [2024-07-25 10:30:38.927934] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x208b220 name Existed_Raid, state offline 00:15:35.429 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2379833 00:15:35.429 10:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2379833 ']' 00:15:35.429 10:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2379833 00:15:35.429 10:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:35.429 10:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.429 10:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2379833 00:15:35.429 10:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:15:35.429 10:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.429 10:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2379833' 00:15:35.429 killing process with pid 2379833 00:15:35.429 10:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2379833 00:15:35.429 [2024-07-25 10:30:38.974477] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.429 10:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2379833 00:15:35.429 [2024-07-25 10:30:39.008042] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.687 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:35.687 00:15:35.687 real 0m26.999s 00:15:35.687 user 0m50.575s 00:15:35.687 sys 0m3.863s 00:15:35.687 10:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:35.687 10:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.687 ************************************ 00:15:35.687 END TEST raid_state_function_test_sb 00:15:35.687 ************************************ 00:15:35.687 10:30:39 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:15:35.687 10:30:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:35.687 10:30:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:35.687 10:30:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:35.687 ************************************ 00:15:35.687 START TEST raid_superblock_test 00:15:35.687 ************************************ 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2383619 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2383619 /var/tmp/spdk-raid.sock 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@831 -- # '[' -z 2383619 ']' 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:35.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.687 10:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.687 [2024-07-25 10:30:39.369160] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:15:35.687 [2024-07-25 10:30:39.369233] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383619 ] 00:15:35.945 [2024-07-25 10:30:39.451782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.945 [2024-07-25 10:30:39.570968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.945 [2024-07-25 10:30:39.641942] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.945 [2024-07-25 10:30:39.641976] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.879 10:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.879 10:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:36.879 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:36.879 10:30:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:36.879 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:36.879 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:36.879 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:36.879 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:36.879 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:36.879 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:36.879 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:36.879 malloc1 00:15:36.879 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:37.137 [2024-07-25 10:30:40.794677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:37.137 [2024-07-25 10:30:40.794739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.137 [2024-07-25 10:30:40.794766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7ec2b0 00:15:37.137 [2024-07-25 10:30:40.794782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.137 [2024-07-25 10:30:40.796430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.137 [2024-07-25 10:30:40.796458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:37.137 pt1 
00:15:37.137 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:37.137 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:37.137 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:37.137 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:37.137 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:37.137 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:37.137 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:37.137 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:37.137 10:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:37.395 malloc2 00:15:37.395 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:37.652 [2024-07-25 10:30:41.295357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:37.652 [2024-07-25 10:30:41.295445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.652 [2024-07-25 10:30:41.295466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x99f1e0 00:15:37.652 [2024-07-25 10:30:41.295480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.652 [2024-07-25 10:30:41.297266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.652 [2024-07-25 
10:30:41.297292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:37.652 pt2 00:15:37.652 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:37.652 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:37.652 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:15:37.652 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:15:37.652 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:37.652 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:37.652 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:37.652 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:37.652 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:37.910 malloc3 00:15:37.910 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:38.168 [2024-07-25 10:30:41.791478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:38.168 [2024-07-25 10:30:41.791535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.168 [2024-07-25 10:30:41.791557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9854d0 00:15:38.168 [2024-07-25 10:30:41.791595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.168 [2024-07-25 
10:30:41.793075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.168 [2024-07-25 10:30:41.793113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:38.168 pt3 00:15:38.168 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:38.168 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:38.168 10:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:38.426 [2024-07-25 10:30:42.028120] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:38.426 [2024-07-25 10:30:42.029310] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:38.426 [2024-07-25 10:30:42.029373] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:38.426 [2024-07-25 10:30:42.029546] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x984120 00:15:38.426 [2024-07-25 10:30:42.029564] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:38.426 [2024-07-25 10:30:42.029746] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9853f0 00:15:38.426 [2024-07-25 10:30:42.029928] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x984120 00:15:38.426 [2024-07-25 10:30:42.029945] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x984120 00:15:38.426 [2024-07-25 10:30:42.030053] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.426 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:38.426 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:15:38.426 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:38.426 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:38.426 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:38.426 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:38.426 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.426 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.426 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.426 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.426 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.426 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.684 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:38.684 "name": "raid_bdev1", 00:15:38.684 "uuid": "dc4fb4be-d30b-4bfe-a351-b18cf7b712bc", 00:15:38.684 "strip_size_kb": 0, 00:15:38.684 "state": "online", 00:15:38.684 "raid_level": "raid1", 00:15:38.684 "superblock": true, 00:15:38.684 "num_base_bdevs": 3, 00:15:38.684 "num_base_bdevs_discovered": 3, 00:15:38.684 "num_base_bdevs_operational": 3, 00:15:38.684 "base_bdevs_list": [ 00:15:38.684 { 00:15:38.684 "name": "pt1", 00:15:38.684 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.684 "is_configured": true, 00:15:38.684 "data_offset": 2048, 00:15:38.684 "data_size": 63488 00:15:38.684 }, 00:15:38.684 { 00:15:38.684 "name": "pt2", 00:15:38.684 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:15:38.684 "is_configured": true, 00:15:38.684 "data_offset": 2048, 00:15:38.684 "data_size": 63488 00:15:38.684 }, 00:15:38.684 { 00:15:38.684 "name": "pt3", 00:15:38.684 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.684 "is_configured": true, 00:15:38.684 "data_offset": 2048, 00:15:38.684 "data_size": 63488 00:15:38.684 } 00:15:38.684 ] 00:15:38.684 }' 00:15:38.684 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:38.684 10:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.249 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:39.249 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:39.249 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:39.249 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:39.249 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:39.249 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:39.249 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:39.249 10:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:39.507 [2024-07-25 10:30:43.059133] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.507 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:39.507 "name": "raid_bdev1", 00:15:39.507 "aliases": [ 00:15:39.507 "dc4fb4be-d30b-4bfe-a351-b18cf7b712bc" 00:15:39.507 ], 00:15:39.507 "product_name": "Raid Volume", 00:15:39.507 "block_size": 512, 00:15:39.507 "num_blocks": 
63488, 00:15:39.507 "uuid": "dc4fb4be-d30b-4bfe-a351-b18cf7b712bc", 00:15:39.507 "assigned_rate_limits": { 00:15:39.507 "rw_ios_per_sec": 0, 00:15:39.507 "rw_mbytes_per_sec": 0, 00:15:39.507 "r_mbytes_per_sec": 0, 00:15:39.507 "w_mbytes_per_sec": 0 00:15:39.507 }, 00:15:39.507 "claimed": false, 00:15:39.507 "zoned": false, 00:15:39.507 "supported_io_types": { 00:15:39.507 "read": true, 00:15:39.507 "write": true, 00:15:39.507 "unmap": false, 00:15:39.507 "flush": false, 00:15:39.507 "reset": true, 00:15:39.507 "nvme_admin": false, 00:15:39.507 "nvme_io": false, 00:15:39.507 "nvme_io_md": false, 00:15:39.507 "write_zeroes": true, 00:15:39.507 "zcopy": false, 00:15:39.507 "get_zone_info": false, 00:15:39.507 "zone_management": false, 00:15:39.507 "zone_append": false, 00:15:39.507 "compare": false, 00:15:39.507 "compare_and_write": false, 00:15:39.507 "abort": false, 00:15:39.507 "seek_hole": false, 00:15:39.507 "seek_data": false, 00:15:39.507 "copy": false, 00:15:39.507 "nvme_iov_md": false 00:15:39.507 }, 00:15:39.507 "memory_domains": [ 00:15:39.507 { 00:15:39.507 "dma_device_id": "system", 00:15:39.507 "dma_device_type": 1 00:15:39.507 }, 00:15:39.507 { 00:15:39.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.507 "dma_device_type": 2 00:15:39.507 }, 00:15:39.507 { 00:15:39.507 "dma_device_id": "system", 00:15:39.507 "dma_device_type": 1 00:15:39.507 }, 00:15:39.507 { 00:15:39.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.507 "dma_device_type": 2 00:15:39.507 }, 00:15:39.507 { 00:15:39.507 "dma_device_id": "system", 00:15:39.507 "dma_device_type": 1 00:15:39.507 }, 00:15:39.507 { 00:15:39.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.507 "dma_device_type": 2 00:15:39.507 } 00:15:39.507 ], 00:15:39.507 "driver_specific": { 00:15:39.507 "raid": { 00:15:39.507 "uuid": "dc4fb4be-d30b-4bfe-a351-b18cf7b712bc", 00:15:39.507 "strip_size_kb": 0, 00:15:39.507 "state": "online", 00:15:39.507 "raid_level": "raid1", 00:15:39.507 "superblock": true, 
00:15:39.507 "num_base_bdevs": 3, 00:15:39.507 "num_base_bdevs_discovered": 3, 00:15:39.507 "num_base_bdevs_operational": 3, 00:15:39.507 "base_bdevs_list": [ 00:15:39.507 { 00:15:39.507 "name": "pt1", 00:15:39.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:39.507 "is_configured": true, 00:15:39.507 "data_offset": 2048, 00:15:39.507 "data_size": 63488 00:15:39.508 }, 00:15:39.508 { 00:15:39.508 "name": "pt2", 00:15:39.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.508 "is_configured": true, 00:15:39.508 "data_offset": 2048, 00:15:39.508 "data_size": 63488 00:15:39.508 }, 00:15:39.508 { 00:15:39.508 "name": "pt3", 00:15:39.508 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:39.508 "is_configured": true, 00:15:39.508 "data_offset": 2048, 00:15:39.508 "data_size": 63488 00:15:39.508 } 00:15:39.508 ] 00:15:39.508 } 00:15:39.508 } 00:15:39.508 }' 00:15:39.508 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:39.508 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:39.508 pt2 00:15:39.508 pt3' 00:15:39.508 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:39.508 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:39.508 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:39.765 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:39.765 "name": "pt1", 00:15:39.765 "aliases": [ 00:15:39.765 "00000000-0000-0000-0000-000000000001" 00:15:39.765 ], 00:15:39.765 "product_name": "passthru", 00:15:39.765 "block_size": 512, 00:15:39.765 "num_blocks": 65536, 00:15:39.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:39.765 
"assigned_rate_limits": { 00:15:39.765 "rw_ios_per_sec": 0, 00:15:39.765 "rw_mbytes_per_sec": 0, 00:15:39.765 "r_mbytes_per_sec": 0, 00:15:39.765 "w_mbytes_per_sec": 0 00:15:39.765 }, 00:15:39.765 "claimed": true, 00:15:39.765 "claim_type": "exclusive_write", 00:15:39.765 "zoned": false, 00:15:39.765 "supported_io_types": { 00:15:39.765 "read": true, 00:15:39.765 "write": true, 00:15:39.765 "unmap": true, 00:15:39.766 "flush": true, 00:15:39.766 "reset": true, 00:15:39.766 "nvme_admin": false, 00:15:39.766 "nvme_io": false, 00:15:39.766 "nvme_io_md": false, 00:15:39.766 "write_zeroes": true, 00:15:39.766 "zcopy": true, 00:15:39.766 "get_zone_info": false, 00:15:39.766 "zone_management": false, 00:15:39.766 "zone_append": false, 00:15:39.766 "compare": false, 00:15:39.766 "compare_and_write": false, 00:15:39.766 "abort": true, 00:15:39.766 "seek_hole": false, 00:15:39.766 "seek_data": false, 00:15:39.766 "copy": true, 00:15:39.766 "nvme_iov_md": false 00:15:39.766 }, 00:15:39.766 "memory_domains": [ 00:15:39.766 { 00:15:39.766 "dma_device_id": "system", 00:15:39.766 "dma_device_type": 1 00:15:39.766 }, 00:15:39.766 { 00:15:39.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.766 "dma_device_type": 2 00:15:39.766 } 00:15:39.766 ], 00:15:39.766 "driver_specific": { 00:15:39.766 "passthru": { 00:15:39.766 "name": "pt1", 00:15:39.766 "base_bdev_name": "malloc1" 00:15:39.766 } 00:15:39.766 } 00:15:39.766 }' 00:15:39.766 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:39.766 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:39.766 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:39.766 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.023 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.023 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
[[ null == null ]] 00:15:40.023 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.023 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.023 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.023 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.023 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.023 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.023 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.023 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:40.023 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.281 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.281 "name": "pt2", 00:15:40.281 "aliases": [ 00:15:40.281 "00000000-0000-0000-0000-000000000002" 00:15:40.281 ], 00:15:40.281 "product_name": "passthru", 00:15:40.281 "block_size": 512, 00:15:40.281 "num_blocks": 65536, 00:15:40.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.281 "assigned_rate_limits": { 00:15:40.281 "rw_ios_per_sec": 0, 00:15:40.281 "rw_mbytes_per_sec": 0, 00:15:40.281 "r_mbytes_per_sec": 0, 00:15:40.281 "w_mbytes_per_sec": 0 00:15:40.281 }, 00:15:40.281 "claimed": true, 00:15:40.281 "claim_type": "exclusive_write", 00:15:40.281 "zoned": false, 00:15:40.281 "supported_io_types": { 00:15:40.281 "read": true, 00:15:40.281 "write": true, 00:15:40.281 "unmap": true, 00:15:40.281 "flush": true, 00:15:40.281 "reset": true, 00:15:40.281 "nvme_admin": false, 00:15:40.281 "nvme_io": false, 00:15:40.281 "nvme_io_md": false, 00:15:40.281 
"write_zeroes": true, 00:15:40.281 "zcopy": true, 00:15:40.281 "get_zone_info": false, 00:15:40.281 "zone_management": false, 00:15:40.281 "zone_append": false, 00:15:40.281 "compare": false, 00:15:40.281 "compare_and_write": false, 00:15:40.281 "abort": true, 00:15:40.281 "seek_hole": false, 00:15:40.281 "seek_data": false, 00:15:40.281 "copy": true, 00:15:40.281 "nvme_iov_md": false 00:15:40.281 }, 00:15:40.281 "memory_domains": [ 00:15:40.281 { 00:15:40.281 "dma_device_id": "system", 00:15:40.281 "dma_device_type": 1 00:15:40.281 }, 00:15:40.281 { 00:15:40.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.281 "dma_device_type": 2 00:15:40.281 } 00:15:40.281 ], 00:15:40.281 "driver_specific": { 00:15:40.281 "passthru": { 00:15:40.281 "name": "pt2", 00:15:40.281 "base_bdev_name": "malloc2" 00:15:40.281 } 00:15:40.281 } 00:15:40.281 }' 00:15:40.281 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.281 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.281 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:40.281 10:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.539 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.539 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:40.539 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.539 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.539 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.539 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.539 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.539 10:30:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.539 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.539 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:40.539 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.797 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.797 "name": "pt3", 00:15:40.797 "aliases": [ 00:15:40.797 "00000000-0000-0000-0000-000000000003" 00:15:40.797 ], 00:15:40.797 "product_name": "passthru", 00:15:40.797 "block_size": 512, 00:15:40.797 "num_blocks": 65536, 00:15:40.797 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:40.797 "assigned_rate_limits": { 00:15:40.797 "rw_ios_per_sec": 0, 00:15:40.797 "rw_mbytes_per_sec": 0, 00:15:40.797 "r_mbytes_per_sec": 0, 00:15:40.797 "w_mbytes_per_sec": 0 00:15:40.797 }, 00:15:40.797 "claimed": true, 00:15:40.797 "claim_type": "exclusive_write", 00:15:40.797 "zoned": false, 00:15:40.797 "supported_io_types": { 00:15:40.797 "read": true, 00:15:40.797 "write": true, 00:15:40.797 "unmap": true, 00:15:40.797 "flush": true, 00:15:40.797 "reset": true, 00:15:40.797 "nvme_admin": false, 00:15:40.797 "nvme_io": false, 00:15:40.797 "nvme_io_md": false, 00:15:40.797 "write_zeroes": true, 00:15:40.797 "zcopy": true, 00:15:40.797 "get_zone_info": false, 00:15:40.797 "zone_management": false, 00:15:40.797 "zone_append": false, 00:15:40.797 "compare": false, 00:15:40.797 "compare_and_write": false, 00:15:40.797 "abort": true, 00:15:40.797 "seek_hole": false, 00:15:40.797 "seek_data": false, 00:15:40.797 "copy": true, 00:15:40.797 "nvme_iov_md": false 00:15:40.797 }, 00:15:40.798 "memory_domains": [ 00:15:40.798 { 00:15:40.798 "dma_device_id": "system", 00:15:40.798 "dma_device_type": 1 00:15:40.798 }, 00:15:40.798 { 00:15:40.798 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.798 "dma_device_type": 2 00:15:40.798 } 00:15:40.798 ], 00:15:40.798 "driver_specific": { 00:15:40.798 "passthru": { 00:15:40.798 "name": "pt3", 00:15:40.798 "base_bdev_name": "malloc3" 00:15:40.798 } 00:15:40.798 } 00:15:40.798 }' 00:15:40.798 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.798 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.056 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:41.056 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.056 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.056 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.056 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.056 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.056 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.056 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.056 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.056 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.056 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:41.056 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:41.314 [2024-07-25 10:30:44.964199] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.314 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # 
raid_bdev_uuid=dc4fb4be-d30b-4bfe-a351-b18cf7b712bc 00:15:41.314 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z dc4fb4be-d30b-4bfe-a351-b18cf7b712bc ']' 00:15:41.314 10:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:41.572 [2024-07-25 10:30:45.216524] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.572 [2024-07-25 10:30:45.216551] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.572 [2024-07-25 10:30:45.216622] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.572 [2024-07-25 10:30:45.216696] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.572 [2024-07-25 10:30:45.216708] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x984120 name raid_bdev1, state offline 00:15:41.572 10:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.572 10:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:41.831 10:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:41.831 10:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:41.831 10:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:41.831 10:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:42.089 10:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:42.089 10:30:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:42.347 10:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:42.347 10:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:42.606 10:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:42.606 10:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:42.864 10:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:42.864 10:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:42.864 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:42.864 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:42.864 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:42.864 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.864 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:42.864 
10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.864 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:42.864 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.864 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:42.864 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:42.864 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:43.122 [2024-07-25 10:30:46.696462] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:43.122 [2024-07-25 10:30:46.697788] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:43.122 [2024-07-25 10:30:46.697832] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:43.122 [2024-07-25 10:30:46.697894] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:43.122 [2024-07-25 10:30:46.697961] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:43.122 [2024-07-25 10:30:46.697988] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:43.122 [2024-07-25 10:30:46.698008] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.122 [2024-07-25 10:30:46.698018] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x98f170 name raid_bdev1, state configuring 00:15:43.122 request: 00:15:43.122 { 00:15:43.122 "name": "raid_bdev1", 00:15:43.122 "raid_level": "raid1", 00:15:43.122 "base_bdevs": [ 00:15:43.122 "malloc1", 00:15:43.122 "malloc2", 00:15:43.122 "malloc3" 00:15:43.122 ], 00:15:43.122 "superblock": false, 00:15:43.122 "method": "bdev_raid_create", 00:15:43.122 "req_id": 1 00:15:43.122 } 00:15:43.122 Got JSON-RPC error response 00:15:43.122 response: 00:15:43.122 { 00:15:43.122 "code": -17, 00:15:43.122 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:43.122 } 00:15:43.122 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:43.122 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:43.122 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:43.122 10:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:43.122 10:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.122 10:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:43.380 10:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:43.380 10:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:43.380 10:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:43.638 [2024-07-25 10:30:47.197737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:43.638 [2024-07-25 10:30:47.197822] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:43.638 [2024-07-25 10:30:47.197847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x983df0 00:15:43.638 [2024-07-25 10:30:47.197860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.638 [2024-07-25 10:30:47.199529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.638 [2024-07-25 10:30:47.199553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:43.638 [2024-07-25 10:30:47.199664] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:43.638 [2024-07-25 10:30:47.199698] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:43.638 pt1 00:15:43.638 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:43.638 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:43.638 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.638 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:43.638 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:43.638 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:43.639 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.639 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.639 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.639 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.639 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.639 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.897 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.897 "name": "raid_bdev1", 00:15:43.897 "uuid": "dc4fb4be-d30b-4bfe-a351-b18cf7b712bc", 00:15:43.897 "strip_size_kb": 0, 00:15:43.897 "state": "configuring", 00:15:43.897 "raid_level": "raid1", 00:15:43.897 "superblock": true, 00:15:43.897 "num_base_bdevs": 3, 00:15:43.897 "num_base_bdevs_discovered": 1, 00:15:43.897 "num_base_bdevs_operational": 3, 00:15:43.897 "base_bdevs_list": [ 00:15:43.897 { 00:15:43.897 "name": "pt1", 00:15:43.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:43.897 "is_configured": true, 00:15:43.897 "data_offset": 2048, 00:15:43.897 "data_size": 63488 00:15:43.897 }, 00:15:43.897 { 00:15:43.897 "name": null, 00:15:43.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.897 "is_configured": false, 00:15:43.897 "data_offset": 2048, 00:15:43.897 "data_size": 63488 00:15:43.897 }, 00:15:43.897 { 00:15:43.897 "name": null, 00:15:43.897 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:43.897 "is_configured": false, 00:15:43.897 "data_offset": 2048, 00:15:43.897 "data_size": 63488 00:15:43.897 } 00:15:43.897 ] 00:15:43.897 }' 00:15:43.897 10:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.897 10:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.463 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:15:44.463 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:44.722 
[2024-07-25 10:30:48.252557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:44.722 [2024-07-25 10:30:48.252642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.722 [2024-07-25 10:30:48.252668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7e37f0 00:15:44.722 [2024-07-25 10:30:48.252682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.722 [2024-07-25 10:30:48.253086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.722 [2024-07-25 10:30:48.253131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:44.722 [2024-07-25 10:30:48.253222] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:44.722 [2024-07-25 10:30:48.253247] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:44.722 pt2 00:15:44.722 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:44.980 [2024-07-25 10:30:48.501271] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:44.980 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:44.980 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:44.980 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:44.980 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:44.981 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:44.981 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:44.981 10:30:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.981 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.981 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.981 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.981 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.981 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.239 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.239 "name": "raid_bdev1", 00:15:45.239 "uuid": "dc4fb4be-d30b-4bfe-a351-b18cf7b712bc", 00:15:45.239 "strip_size_kb": 0, 00:15:45.239 "state": "configuring", 00:15:45.239 "raid_level": "raid1", 00:15:45.239 "superblock": true, 00:15:45.239 "num_base_bdevs": 3, 00:15:45.239 "num_base_bdevs_discovered": 1, 00:15:45.239 "num_base_bdevs_operational": 3, 00:15:45.239 "base_bdevs_list": [ 00:15:45.239 { 00:15:45.239 "name": "pt1", 00:15:45.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.239 "is_configured": true, 00:15:45.239 "data_offset": 2048, 00:15:45.239 "data_size": 63488 00:15:45.239 }, 00:15:45.239 { 00:15:45.239 "name": null, 00:15:45.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.239 "is_configured": false, 00:15:45.239 "data_offset": 2048, 00:15:45.239 "data_size": 63488 00:15:45.239 }, 00:15:45.239 { 00:15:45.239 "name": null, 00:15:45.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.239 "is_configured": false, 00:15:45.239 "data_offset": 2048, 00:15:45.239 "data_size": 63488 00:15:45.239 } 00:15:45.239 ] 00:15:45.239 }' 00:15:45.239 10:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:15:45.239 10:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.804 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:45.804 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:45.804 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:46.062 [2024-07-25 10:30:49.523911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:46.062 [2024-07-25 10:30:49.523978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.062 [2024-07-25 10:30:49.524002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7e3b00 00:15:46.062 [2024-07-25 10:30:49.524018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.062 [2024-07-25 10:30:49.524460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.062 [2024-07-25 10:30:49.524486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:46.062 [2024-07-25 10:30:49.524568] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:46.062 [2024-07-25 10:30:49.524597] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.062 pt2 00:15:46.062 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:46.062 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:46.062 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 
00:15:46.320 [2024-07-25 10:30:49.772562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:46.320 [2024-07-25 10:30:49.772600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.320 [2024-07-25 10:30:49.772621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x98f4b0 00:15:46.320 [2024-07-25 10:30:49.772636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.320 [2024-07-25 10:30:49.772918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.320 [2024-07-25 10:30:49.772944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:46.320 [2024-07-25 10:30:49.773001] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:46.320 [2024-07-25 10:30:49.773026] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:46.320 [2024-07-25 10:30:49.773158] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x7e3e20 00:15:46.320 [2024-07-25 10:30:49.773175] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:46.320 [2024-07-25 10:30:49.773349] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x7e7410 00:15:46.320 [2024-07-25 10:30:49.773513] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x7e3e20 00:15:46.320 [2024-07-25 10:30:49.773529] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x7e3e20 00:15:46.320 [2024-07-25 10:30:49.773642] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.320 pt3 00:15:46.320 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:46.320 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:46.320 10:30:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:46.321 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:46.321 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:46.321 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:46.321 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:46.321 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:46.321 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.321 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.321 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.321 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.321 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.321 10:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.579 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.579 "name": "raid_bdev1", 00:15:46.579 "uuid": "dc4fb4be-d30b-4bfe-a351-b18cf7b712bc", 00:15:46.579 "strip_size_kb": 0, 00:15:46.579 "state": "online", 00:15:46.579 "raid_level": "raid1", 00:15:46.579 "superblock": true, 00:15:46.579 "num_base_bdevs": 3, 00:15:46.579 "num_base_bdevs_discovered": 3, 00:15:46.579 "num_base_bdevs_operational": 3, 00:15:46.579 "base_bdevs_list": [ 00:15:46.579 { 00:15:46.579 "name": "pt1", 00:15:46.579 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:46.579 "is_configured": true, 
00:15:46.579 "data_offset": 2048, 00:15:46.579 "data_size": 63488 00:15:46.579 }, 00:15:46.579 { 00:15:46.579 "name": "pt2", 00:15:46.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.579 "is_configured": true, 00:15:46.579 "data_offset": 2048, 00:15:46.579 "data_size": 63488 00:15:46.579 }, 00:15:46.579 { 00:15:46.579 "name": "pt3", 00:15:46.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.579 "is_configured": true, 00:15:46.579 "data_offset": 2048, 00:15:46.579 "data_size": 63488 00:15:46.579 } 00:15:46.579 ] 00:15:46.579 }' 00:15:46.579 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.579 10:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.144 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:47.144 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:47.144 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:47.144 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:47.144 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:47.144 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:47.144 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:47.144 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:47.144 [2024-07-25 10:30:50.815877] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.144 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:47.144 "name": "raid_bdev1", 00:15:47.144 "aliases": [ 00:15:47.145 
"dc4fb4be-d30b-4bfe-a351-b18cf7b712bc" 00:15:47.145 ], 00:15:47.145 "product_name": "Raid Volume", 00:15:47.145 "block_size": 512, 00:15:47.145 "num_blocks": 63488, 00:15:47.145 "uuid": "dc4fb4be-d30b-4bfe-a351-b18cf7b712bc", 00:15:47.145 "assigned_rate_limits": { 00:15:47.145 "rw_ios_per_sec": 0, 00:15:47.145 "rw_mbytes_per_sec": 0, 00:15:47.145 "r_mbytes_per_sec": 0, 00:15:47.145 "w_mbytes_per_sec": 0 00:15:47.145 }, 00:15:47.145 "claimed": false, 00:15:47.145 "zoned": false, 00:15:47.145 "supported_io_types": { 00:15:47.145 "read": true, 00:15:47.145 "write": true, 00:15:47.145 "unmap": false, 00:15:47.145 "flush": false, 00:15:47.145 "reset": true, 00:15:47.145 "nvme_admin": false, 00:15:47.145 "nvme_io": false, 00:15:47.145 "nvme_io_md": false, 00:15:47.145 "write_zeroes": true, 00:15:47.145 "zcopy": false, 00:15:47.145 "get_zone_info": false, 00:15:47.145 "zone_management": false, 00:15:47.145 "zone_append": false, 00:15:47.145 "compare": false, 00:15:47.145 "compare_and_write": false, 00:15:47.145 "abort": false, 00:15:47.145 "seek_hole": false, 00:15:47.145 "seek_data": false, 00:15:47.145 "copy": false, 00:15:47.145 "nvme_iov_md": false 00:15:47.145 }, 00:15:47.145 "memory_domains": [ 00:15:47.145 { 00:15:47.145 "dma_device_id": "system", 00:15:47.145 "dma_device_type": 1 00:15:47.145 }, 00:15:47.145 { 00:15:47.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.145 "dma_device_type": 2 00:15:47.145 }, 00:15:47.145 { 00:15:47.145 "dma_device_id": "system", 00:15:47.145 "dma_device_type": 1 00:15:47.145 }, 00:15:47.145 { 00:15:47.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.145 "dma_device_type": 2 00:15:47.145 }, 00:15:47.145 { 00:15:47.145 "dma_device_id": "system", 00:15:47.145 "dma_device_type": 1 00:15:47.145 }, 00:15:47.145 { 00:15:47.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.145 "dma_device_type": 2 00:15:47.145 } 00:15:47.145 ], 00:15:47.145 "driver_specific": { 00:15:47.145 "raid": { 00:15:47.145 "uuid": 
"dc4fb4be-d30b-4bfe-a351-b18cf7b712bc", 00:15:47.145 "strip_size_kb": 0, 00:15:47.145 "state": "online", 00:15:47.145 "raid_level": "raid1", 00:15:47.145 "superblock": true, 00:15:47.145 "num_base_bdevs": 3, 00:15:47.145 "num_base_bdevs_discovered": 3, 00:15:47.145 "num_base_bdevs_operational": 3, 00:15:47.145 "base_bdevs_list": [ 00:15:47.145 { 00:15:47.145 "name": "pt1", 00:15:47.145 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.145 "is_configured": true, 00:15:47.145 "data_offset": 2048, 00:15:47.145 "data_size": 63488 00:15:47.145 }, 00:15:47.145 { 00:15:47.145 "name": "pt2", 00:15:47.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.145 "is_configured": true, 00:15:47.145 "data_offset": 2048, 00:15:47.145 "data_size": 63488 00:15:47.145 }, 00:15:47.145 { 00:15:47.145 "name": "pt3", 00:15:47.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.145 "is_configured": true, 00:15:47.145 "data_offset": 2048, 00:15:47.145 "data_size": 63488 00:15:47.145 } 00:15:47.145 ] 00:15:47.145 } 00:15:47.145 } 00:15:47.145 }' 00:15:47.145 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.403 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:47.403 pt2 00:15:47.403 pt3' 00:15:47.403 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:47.403 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:47.403 10:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:47.661 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:47.661 "name": "pt1", 00:15:47.661 "aliases": [ 00:15:47.661 "00000000-0000-0000-0000-000000000001" 00:15:47.661 ], 
00:15:47.661 "product_name": "passthru", 00:15:47.661 "block_size": 512, 00:15:47.661 "num_blocks": 65536, 00:15:47.661 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.661 "assigned_rate_limits": { 00:15:47.661 "rw_ios_per_sec": 0, 00:15:47.661 "rw_mbytes_per_sec": 0, 00:15:47.661 "r_mbytes_per_sec": 0, 00:15:47.661 "w_mbytes_per_sec": 0 00:15:47.661 }, 00:15:47.661 "claimed": true, 00:15:47.661 "claim_type": "exclusive_write", 00:15:47.661 "zoned": false, 00:15:47.661 "supported_io_types": { 00:15:47.661 "read": true, 00:15:47.661 "write": true, 00:15:47.661 "unmap": true, 00:15:47.661 "flush": true, 00:15:47.661 "reset": true, 00:15:47.661 "nvme_admin": false, 00:15:47.661 "nvme_io": false, 00:15:47.661 "nvme_io_md": false, 00:15:47.661 "write_zeroes": true, 00:15:47.661 "zcopy": true, 00:15:47.661 "get_zone_info": false, 00:15:47.661 "zone_management": false, 00:15:47.661 "zone_append": false, 00:15:47.661 "compare": false, 00:15:47.661 "compare_and_write": false, 00:15:47.661 "abort": true, 00:15:47.661 "seek_hole": false, 00:15:47.661 "seek_data": false, 00:15:47.661 "copy": true, 00:15:47.661 "nvme_iov_md": false 00:15:47.661 }, 00:15:47.661 "memory_domains": [ 00:15:47.661 { 00:15:47.661 "dma_device_id": "system", 00:15:47.661 "dma_device_type": 1 00:15:47.661 }, 00:15:47.661 { 00:15:47.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.661 "dma_device_type": 2 00:15:47.661 } 00:15:47.661 ], 00:15:47.661 "driver_specific": { 00:15:47.661 "passthru": { 00:15:47.661 "name": "pt1", 00:15:47.661 "base_bdev_name": "malloc1" 00:15:47.661 } 00:15:47.661 } 00:15:47.661 }' 00:15:47.661 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.661 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.661 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:47.661 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:15:47.661 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.661 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:47.661 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.661 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.661 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:47.661 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.919 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.919 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:47.919 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:47.919 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:47.919 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:48.176 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:48.176 "name": "pt2", 00:15:48.176 "aliases": [ 00:15:48.176 "00000000-0000-0000-0000-000000000002" 00:15:48.176 ], 00:15:48.176 "product_name": "passthru", 00:15:48.176 "block_size": 512, 00:15:48.177 "num_blocks": 65536, 00:15:48.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.177 "assigned_rate_limits": { 00:15:48.177 "rw_ios_per_sec": 0, 00:15:48.177 "rw_mbytes_per_sec": 0, 00:15:48.177 "r_mbytes_per_sec": 0, 00:15:48.177 "w_mbytes_per_sec": 0 00:15:48.177 }, 00:15:48.177 "claimed": true, 00:15:48.177 "claim_type": "exclusive_write", 00:15:48.177 "zoned": false, 00:15:48.177 "supported_io_types": { 00:15:48.177 "read": true, 00:15:48.177 "write": true, 
00:15:48.177 "unmap": true, 00:15:48.177 "flush": true, 00:15:48.177 "reset": true, 00:15:48.177 "nvme_admin": false, 00:15:48.177 "nvme_io": false, 00:15:48.177 "nvme_io_md": false, 00:15:48.177 "write_zeroes": true, 00:15:48.177 "zcopy": true, 00:15:48.177 "get_zone_info": false, 00:15:48.177 "zone_management": false, 00:15:48.177 "zone_append": false, 00:15:48.177 "compare": false, 00:15:48.177 "compare_and_write": false, 00:15:48.177 "abort": true, 00:15:48.177 "seek_hole": false, 00:15:48.177 "seek_data": false, 00:15:48.177 "copy": true, 00:15:48.177 "nvme_iov_md": false 00:15:48.177 }, 00:15:48.177 "memory_domains": [ 00:15:48.177 { 00:15:48.177 "dma_device_id": "system", 00:15:48.177 "dma_device_type": 1 00:15:48.177 }, 00:15:48.177 { 00:15:48.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.177 "dma_device_type": 2 00:15:48.177 } 00:15:48.177 ], 00:15:48.177 "driver_specific": { 00:15:48.177 "passthru": { 00:15:48.177 "name": "pt2", 00:15:48.177 "base_bdev_name": "malloc2" 00:15:48.177 } 00:15:48.177 } 00:15:48.177 }' 00:15:48.177 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.177 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.177 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:48.177 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.177 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.177 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:48.177 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.177 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.435 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:48.436 10:30:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.436 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.436 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:48.436 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:48.436 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:48.436 10:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:48.693 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:48.693 "name": "pt3", 00:15:48.693 "aliases": [ 00:15:48.693 "00000000-0000-0000-0000-000000000003" 00:15:48.693 ], 00:15:48.693 "product_name": "passthru", 00:15:48.693 "block_size": 512, 00:15:48.693 "num_blocks": 65536, 00:15:48.693 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.693 "assigned_rate_limits": { 00:15:48.693 "rw_ios_per_sec": 0, 00:15:48.693 "rw_mbytes_per_sec": 0, 00:15:48.693 "r_mbytes_per_sec": 0, 00:15:48.693 "w_mbytes_per_sec": 0 00:15:48.693 }, 00:15:48.693 "claimed": true, 00:15:48.693 "claim_type": "exclusive_write", 00:15:48.694 "zoned": false, 00:15:48.694 "supported_io_types": { 00:15:48.694 "read": true, 00:15:48.694 "write": true, 00:15:48.694 "unmap": true, 00:15:48.694 "flush": true, 00:15:48.694 "reset": true, 00:15:48.694 "nvme_admin": false, 00:15:48.694 "nvme_io": false, 00:15:48.694 "nvme_io_md": false, 00:15:48.694 "write_zeroes": true, 00:15:48.694 "zcopy": true, 00:15:48.694 "get_zone_info": false, 00:15:48.694 "zone_management": false, 00:15:48.694 "zone_append": false, 00:15:48.694 "compare": false, 00:15:48.694 "compare_and_write": false, 00:15:48.694 "abort": true, 00:15:48.694 "seek_hole": false, 00:15:48.694 "seek_data": false, 00:15:48.694 "copy": true, 00:15:48.694 "nvme_iov_md": 
false 00:15:48.694 }, 00:15:48.694 "memory_domains": [ 00:15:48.694 { 00:15:48.694 "dma_device_id": "system", 00:15:48.694 "dma_device_type": 1 00:15:48.694 }, 00:15:48.694 { 00:15:48.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.694 "dma_device_type": 2 00:15:48.694 } 00:15:48.694 ], 00:15:48.694 "driver_specific": { 00:15:48.694 "passthru": { 00:15:48.694 "name": "pt3", 00:15:48.694 "base_bdev_name": "malloc3" 00:15:48.694 } 00:15:48.694 } 00:15:48.694 }' 00:15:48.694 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.694 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.694 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:48.694 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.694 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.694 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:48.694 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.694 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.951 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:48.951 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.951 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.951 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:48.951 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:48.951 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:49.209 [2024-07-25 
10:30:52.740948] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.209 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' dc4fb4be-d30b-4bfe-a351-b18cf7b712bc '!=' dc4fb4be-d30b-4bfe-a351-b18cf7b712bc ']' 00:15:49.209 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:15:49.209 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:49.209 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:49.209 10:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:49.467 [2024-07-25 10:30:52.989349] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:49.467 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:49.467 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:49.467 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:49.467 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:49.467 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:49.467 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:49.467 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:49.467 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:49.467 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:49.467 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:49.467 10:30:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.467 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.725 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:49.725 "name": "raid_bdev1", 00:15:49.725 "uuid": "dc4fb4be-d30b-4bfe-a351-b18cf7b712bc", 00:15:49.725 "strip_size_kb": 0, 00:15:49.725 "state": "online", 00:15:49.725 "raid_level": "raid1", 00:15:49.725 "superblock": true, 00:15:49.725 "num_base_bdevs": 3, 00:15:49.725 "num_base_bdevs_discovered": 2, 00:15:49.725 "num_base_bdevs_operational": 2, 00:15:49.725 "base_bdevs_list": [ 00:15:49.725 { 00:15:49.725 "name": null, 00:15:49.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.725 "is_configured": false, 00:15:49.725 "data_offset": 2048, 00:15:49.725 "data_size": 63488 00:15:49.725 }, 00:15:49.725 { 00:15:49.725 "name": "pt2", 00:15:49.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.725 "is_configured": true, 00:15:49.725 "data_offset": 2048, 00:15:49.725 "data_size": 63488 00:15:49.725 }, 00:15:49.725 { 00:15:49.725 "name": "pt3", 00:15:49.725 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.725 "is_configured": true, 00:15:49.725 "data_offset": 2048, 00:15:49.725 "data_size": 63488 00:15:49.725 } 00:15:49.725 ] 00:15:49.725 }' 00:15:49.725 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:49.725 10:30:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.289 10:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:50.547 [2024-07-25 10:30:54.052176] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:15:50.547 [2024-07-25 10:30:54.052208] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.547 [2024-07-25 10:30:54.052289] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.547 [2024-07-25 10:30:54.052363] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.547 [2024-07-25 10:30:54.052389] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x7e3e20 name raid_bdev1, state offline 00:15:50.547 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.547 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:15:50.804 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:15:50.804 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:15:50.804 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:15:50.804 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:15:50.804 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:51.061 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:15:51.061 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:15:51.061 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:51.317 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:15:51.317 10:30:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:15:51.317 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:15:51.317 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:15:51.317 10:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.317 [2024-07-25 10:30:55.018648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.317 [2024-07-25 10:30:55.018704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.317 [2024-07-25 10:30:55.018727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7e5990 00:15:51.317 [2024-07-25 10:30:55.018741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.317 [2024-07-25 10:30:55.020477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.317 [2024-07-25 10:30:55.020505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.317 [2024-07-25 10:30:55.020589] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:51.317 [2024-07-25 10:30:55.020627] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.317 pt2 00:15:51.575 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:51.575 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:51.575 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:51.575 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:51.575 10:30:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:51.575 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:51.575 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.575 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.575 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.575 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.575 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.575 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.832 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.832 "name": "raid_bdev1", 00:15:51.833 "uuid": "dc4fb4be-d30b-4bfe-a351-b18cf7b712bc", 00:15:51.833 "strip_size_kb": 0, 00:15:51.833 "state": "configuring", 00:15:51.833 "raid_level": "raid1", 00:15:51.833 "superblock": true, 00:15:51.833 "num_base_bdevs": 3, 00:15:51.833 "num_base_bdevs_discovered": 1, 00:15:51.833 "num_base_bdevs_operational": 2, 00:15:51.833 "base_bdevs_list": [ 00:15:51.833 { 00:15:51.833 "name": null, 00:15:51.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.833 "is_configured": false, 00:15:51.833 "data_offset": 2048, 00:15:51.833 "data_size": 63488 00:15:51.833 }, 00:15:51.833 { 00:15:51.833 "name": "pt2", 00:15:51.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.833 "is_configured": true, 00:15:51.833 "data_offset": 2048, 00:15:51.833 "data_size": 63488 00:15:51.833 }, 00:15:51.833 { 00:15:51.833 "name": null, 00:15:51.833 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.833 
"is_configured": false, 00:15:51.833 "data_offset": 2048, 00:15:51.833 "data_size": 63488 00:15:51.833 } 00:15:51.833 ] 00:15:51.833 }' 00:15:51.833 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.833 10:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.398 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:15:52.398 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:15:52.398 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:15:52.398 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:52.398 [2024-07-25 10:30:56.065421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:52.398 [2024-07-25 10:30:56.065493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.398 [2024-07-25 10:30:56.065519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7ec4e0 00:15:52.398 [2024-07-25 10:30:56.065532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.398 [2024-07-25 10:30:56.065903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.398 [2024-07-25 10:30:56.065924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:52.398 [2024-07-25 10:30:56.065997] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:52.398 [2024-07-25 10:30:56.066020] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:52.398 [2024-07-25 10:30:56.066132] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x7e40a0 00:15:52.398 [2024-07-25 
10:30:56.066146] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:52.398 [2024-07-25 10:30:56.066282] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9867b0 00:15:52.398 [2024-07-25 10:30:56.066409] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x7e40a0 00:15:52.398 [2024-07-25 10:30:56.066422] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x7e40a0 00:15:52.398 [2024-07-25 10:30:56.066514] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.398 pt3 00:15:52.398 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:52.398 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:52.398 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:52.398 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:52.398 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:52.398 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:52.398 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.398 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.398 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:52.398 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.398 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.398 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.655 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:52.655 "name": "raid_bdev1", 00:15:52.655 "uuid": "dc4fb4be-d30b-4bfe-a351-b18cf7b712bc", 00:15:52.655 "strip_size_kb": 0, 00:15:52.655 "state": "online", 00:15:52.655 "raid_level": "raid1", 00:15:52.655 "superblock": true, 00:15:52.655 "num_base_bdevs": 3, 00:15:52.655 "num_base_bdevs_discovered": 2, 00:15:52.655 "num_base_bdevs_operational": 2, 00:15:52.655 "base_bdevs_list": [ 00:15:52.655 { 00:15:52.655 "name": null, 00:15:52.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.655 "is_configured": false, 00:15:52.655 "data_offset": 2048, 00:15:52.655 "data_size": 63488 00:15:52.655 }, 00:15:52.655 { 00:15:52.655 "name": "pt2", 00:15:52.655 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.655 "is_configured": true, 00:15:52.655 "data_offset": 2048, 00:15:52.655 "data_size": 63488 00:15:52.655 }, 00:15:52.655 { 00:15:52.655 "name": "pt3", 00:15:52.655 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:52.655 "is_configured": true, 00:15:52.655 "data_offset": 2048, 00:15:52.655 "data_size": 63488 00:15:52.655 } 00:15:52.655 ] 00:15:52.655 }' 00:15:52.655 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:52.655 10:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.221 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:53.478 [2024-07-25 10:30:57.088138] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.478 [2024-07-25 10:30:57.088164] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.478 [2024-07-25 10:30:57.088241] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:15:53.478 [2024-07-25 10:30:57.088304] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.478 [2024-07-25 10:30:57.088317] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x7e40a0 name raid_bdev1, state offline 00:15:53.478 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.478 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:15:53.736 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:15:53.736 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:15:53.736 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:15:53.736 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:15:53.736 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:53.993 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:54.251 [2024-07-25 10:30:57.826031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:54.251 [2024-07-25 10:30:57.826109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.251 [2024-07-25 10:30:57.826139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7ecc70 00:15:54.251 [2024-07-25 10:30:57.826167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.251 [2024-07-25 10:30:57.827661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:15:54.251 [2024-07-25 10:30:57.827683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:54.251 [2024-07-25 10:30:57.827779] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:54.251 [2024-07-25 10:30:57.827810] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:54.251 [2024-07-25 10:30:57.827916] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:54.251 [2024-07-25 10:30:57.827947] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.251 [2024-07-25 10:30:57.827961] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x7ed560 name raid_bdev1, state configuring 00:15:54.251 [2024-07-25 10:30:57.827985] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:54.251 pt1 00:15:54.251 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:15:54.251 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:54.251 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:54.251 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:54.251 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:54.251 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:54.251 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:54.251 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:54.251 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:54.251 10:30:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:54.251 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:54.251 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.251 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.510 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:54.510 "name": "raid_bdev1", 00:15:54.510 "uuid": "dc4fb4be-d30b-4bfe-a351-b18cf7b712bc", 00:15:54.510 "strip_size_kb": 0, 00:15:54.510 "state": "configuring", 00:15:54.510 "raid_level": "raid1", 00:15:54.510 "superblock": true, 00:15:54.510 "num_base_bdevs": 3, 00:15:54.510 "num_base_bdevs_discovered": 1, 00:15:54.510 "num_base_bdevs_operational": 2, 00:15:54.510 "base_bdevs_list": [ 00:15:54.510 { 00:15:54.510 "name": null, 00:15:54.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.510 "is_configured": false, 00:15:54.510 "data_offset": 2048, 00:15:54.510 "data_size": 63488 00:15:54.510 }, 00:15:54.510 { 00:15:54.510 "name": "pt2", 00:15:54.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.510 "is_configured": true, 00:15:54.510 "data_offset": 2048, 00:15:54.510 "data_size": 63488 00:15:54.510 }, 00:15:54.510 { 00:15:54.510 "name": null, 00:15:54.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:54.510 "is_configured": false, 00:15:54.510 "data_offset": 2048, 00:15:54.510 "data_size": 63488 00:15:54.510 } 00:15:54.510 ] 00:15:54.510 }' 00:15:54.510 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:54.510 10:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.075 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:15:55.075 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:55.333 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:15:55.333 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:55.591 [2024-07-25 10:30:59.089321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:55.591 [2024-07-25 10:30:59.089395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.591 [2024-07-25 10:30:59.089421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7e6c00 00:15:55.591 [2024-07-25 10:30:59.089437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.591 [2024-07-25 10:30:59.089909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.591 [2024-07-25 10:30:59.089935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:55.591 [2024-07-25 10:30:59.090032] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:55.591 [2024-07-25 10:30:59.090062] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:55.591 [2024-07-25 10:30:59.090201] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x7e6e50 00:15:55.591 [2024-07-25 10:30:59.090218] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:55.591 [2024-07-25 10:30:59.090393] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x99e490 00:15:55.591 [2024-07-25 10:30:59.090547] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x7e6e50 00:15:55.591 [2024-07-25 10:30:59.090563] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x7e6e50 00:15:55.591 [2024-07-25 10:30:59.090687] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.591 pt3 00:15:55.591 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:55.591 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:55.591 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:55.591 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:55.591 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:55.591 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:55.591 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:55.591 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:55.591 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:55.591 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:55.591 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.591 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.848 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.848 "name": "raid_bdev1", 00:15:55.848 "uuid": "dc4fb4be-d30b-4bfe-a351-b18cf7b712bc", 00:15:55.848 
"strip_size_kb": 0, 00:15:55.848 "state": "online", 00:15:55.848 "raid_level": "raid1", 00:15:55.848 "superblock": true, 00:15:55.848 "num_base_bdevs": 3, 00:15:55.848 "num_base_bdevs_discovered": 2, 00:15:55.848 "num_base_bdevs_operational": 2, 00:15:55.848 "base_bdevs_list": [ 00:15:55.848 { 00:15:55.848 "name": null, 00:15:55.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.848 "is_configured": false, 00:15:55.848 "data_offset": 2048, 00:15:55.848 "data_size": 63488 00:15:55.848 }, 00:15:55.848 { 00:15:55.848 "name": "pt2", 00:15:55.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.848 "is_configured": true, 00:15:55.848 "data_offset": 2048, 00:15:55.848 "data_size": 63488 00:15:55.848 }, 00:15:55.848 { 00:15:55.848 "name": "pt3", 00:15:55.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.848 "is_configured": true, 00:15:55.848 "data_offset": 2048, 00:15:55.848 "data_size": 63488 00:15:55.848 } 00:15:55.848 ] 00:15:55.848 }' 00:15:55.848 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.848 10:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.414 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:56.414 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:56.672 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:15:56.672 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:56.672 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:15:56.672 [2024-07-25 10:31:00.377212] bdev_raid.c:1119:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:15:56.930 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' dc4fb4be-d30b-4bfe-a351-b18cf7b712bc '!=' dc4fb4be-d30b-4bfe-a351-b18cf7b712bc ']' 00:15:56.930 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2383619 00:15:56.930 10:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2383619 ']' 00:15:56.930 10:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2383619 00:15:56.930 10:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:56.930 10:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.930 10:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2383619 00:15:56.930 10:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:56.930 10:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:56.930 10:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2383619' 00:15:56.930 killing process with pid 2383619 00:15:56.930 10:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2383619 00:15:56.930 [2024-07-25 10:31:00.419557] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.930 10:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2383619 00:15:56.930 [2024-07-25 10:31:00.419651] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.930 [2024-07-25 10:31:00.419732] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.930 [2024-07-25 10:31:00.419748] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x7e6e50 name raid_bdev1, state offline 
00:15:56.930 [2024-07-25 10:31:00.453171] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.188 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:57.188 00:15:57.188 real 0m21.421s 00:15:57.188 user 0m39.698s 00:15:57.188 sys 0m2.979s 00:15:57.188 10:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.188 10:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.188 ************************************ 00:15:57.188 END TEST raid_superblock_test 00:15:57.188 ************************************ 00:15:57.188 10:31:00 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:15:57.188 10:31:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:57.188 10:31:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.188 10:31:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.188 ************************************ 00:15:57.188 START TEST raid_read_error_test 00:15:57.188 ************************************ 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 
00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:57.188 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.wpVFSOWupr 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2386601 00:15:57.189 
10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2386601 /var/tmp/spdk-raid.sock 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2386601 ']' 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:57.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.189 10:31:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.189 [2024-07-25 10:31:00.841862] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:15:57.189 [2024-07-25 10:31:00.841927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2386601 ] 00:15:57.446 [2024-07-25 10:31:00.932246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.446 [2024-07-25 10:31:01.063339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.446 [2024-07-25 10:31:01.140233] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.446 [2024-07-25 10:31:01.140270] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.378 10:31:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.378 10:31:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:58.378 10:31:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:58.378 10:31:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:58.378 BaseBdev1_malloc 00:15:58.378 10:31:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:58.635 true 00:15:58.635 10:31:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:58.892 [2024-07-25 10:31:02.501875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:58.892 [2024-07-25 10:31:02.501925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:58.892 [2024-07-25 10:31:02.501949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20fb250 00:15:58.892 [2024-07-25 10:31:02.501965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.892 [2024-07-25 10:31:02.503600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.892 [2024-07-25 10:31:02.503628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:58.892 BaseBdev1 00:15:58.892 10:31:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:58.892 10:31:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:59.149 BaseBdev2_malloc 00:15:59.149 10:31:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:59.407 true 00:15:59.407 10:31:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:59.665 [2024-07-25 10:31:03.238738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:59.665 [2024-07-25 10:31:03.238786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.665 [2024-07-25 10:31:03.238816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20ea650 00:15:59.665 [2024-07-25 10:31:03.238833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.665 [2024-07-25 10:31:03.240280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.665 [2024-07-25 10:31:03.240307] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:59.665 BaseBdev2 00:15:59.665 10:31:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:59.665 10:31:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:59.922 BaseBdev3_malloc 00:15:59.922 10:31:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:00.179 true 00:16:00.179 10:31:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:00.437 [2024-07-25 10:31:03.988292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:00.437 [2024-07-25 10:31:03.988358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.437 [2024-07-25 10:31:03.988400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20e05d0 00:16:00.437 [2024-07-25 10:31:03.988428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.437 [2024-07-25 10:31:03.990264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.437 [2024-07-25 10:31:03.990301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:00.437 BaseBdev3 00:16:00.437 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:00.695 [2024-07-25 10:31:04.232973] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.695 [2024-07-25 10:31:04.234298] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.695 [2024-07-25 10:31:04.234375] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:00.695 [2024-07-25 10:31:04.234619] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f3f6b0 00:16:00.695 [2024-07-25 10:31:04.234636] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:00.695 [2024-07-25 10:31:04.234854] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f3fb00 00:16:00.695 [2024-07-25 10:31:04.235075] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f3f6b0 00:16:00.695 [2024-07-25 10:31:04.235092] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f3f6b0 00:16:00.695 [2024-07-25 10:31:04.235251] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.695 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:00.695 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:00.695 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:00.695 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:00.695 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:00.695 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:00.695 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.695 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.695 
10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.695 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.695 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.695 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.954 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.954 "name": "raid_bdev1", 00:16:00.954 "uuid": "4f0afa97-b2c0-486a-a078-7c9113733a97", 00:16:00.954 "strip_size_kb": 0, 00:16:00.954 "state": "online", 00:16:00.954 "raid_level": "raid1", 00:16:00.954 "superblock": true, 00:16:00.954 "num_base_bdevs": 3, 00:16:00.954 "num_base_bdevs_discovered": 3, 00:16:00.954 "num_base_bdevs_operational": 3, 00:16:00.954 "base_bdevs_list": [ 00:16:00.954 { 00:16:00.954 "name": "BaseBdev1", 00:16:00.954 "uuid": "c0fe54c3-7302-57a4-a2d0-433391e3a2cc", 00:16:00.954 "is_configured": true, 00:16:00.954 "data_offset": 2048, 00:16:00.954 "data_size": 63488 00:16:00.954 }, 00:16:00.954 { 00:16:00.954 "name": "BaseBdev2", 00:16:00.954 "uuid": "274d77fa-5d0f-5f8b-a5f5-97e9f71474af", 00:16:00.954 "is_configured": true, 00:16:00.954 "data_offset": 2048, 00:16:00.954 "data_size": 63488 00:16:00.954 }, 00:16:00.954 { 00:16:00.954 "name": "BaseBdev3", 00:16:00.954 "uuid": "bfca8796-c60c-5e1d-84a3-d8824aa36b0c", 00:16:00.954 "is_configured": true, 00:16:00.954 "data_offset": 2048, 00:16:00.954 "data_size": 63488 00:16:00.954 } 00:16:00.954 ] 00:16:00.954 }' 00:16:00.954 10:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.954 10:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.533 10:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- 
# sleep 1 00:16:01.533 10:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:01.533 [2024-07-25 10:31:05.159798] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f45510 00:16:02.466 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.724 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.982 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.982 "name": "raid_bdev1", 00:16:02.982 "uuid": "4f0afa97-b2c0-486a-a078-7c9113733a97", 00:16:02.982 "strip_size_kb": 0, 00:16:02.982 "state": "online", 00:16:02.983 "raid_level": "raid1", 00:16:02.983 "superblock": true, 00:16:02.983 "num_base_bdevs": 3, 00:16:02.983 "num_base_bdevs_discovered": 3, 00:16:02.983 "num_base_bdevs_operational": 3, 00:16:02.983 "base_bdevs_list": [ 00:16:02.983 { 00:16:02.983 "name": "BaseBdev1", 00:16:02.983 "uuid": "c0fe54c3-7302-57a4-a2d0-433391e3a2cc", 00:16:02.983 "is_configured": true, 00:16:02.983 "data_offset": 2048, 00:16:02.983 "data_size": 63488 00:16:02.983 }, 00:16:02.983 { 00:16:02.983 "name": "BaseBdev2", 00:16:02.983 "uuid": "274d77fa-5d0f-5f8b-a5f5-97e9f71474af", 00:16:02.983 "is_configured": true, 00:16:02.983 "data_offset": 2048, 00:16:02.983 "data_size": 63488 00:16:02.983 }, 00:16:02.983 { 00:16:02.983 "name": "BaseBdev3", 00:16:02.983 "uuid": "bfca8796-c60c-5e1d-84a3-d8824aa36b0c", 00:16:02.983 "is_configured": true, 00:16:02.983 "data_offset": 2048, 00:16:02.983 "data_size": 63488 00:16:02.983 } 00:16:02.983 ] 00:16:02.983 }' 00:16:02.983 10:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.983 10:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.549 10:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:03.807 [2024-07-25 10:31:07.445083] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.807 [2024-07-25 10:31:07.445146] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:03.807 [2024-07-25 10:31:07.448502] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.807 [2024-07-25 10:31:07.448556] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.807 [2024-07-25 10:31:07.448697] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.807 [2024-07-25 10:31:07.448720] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f3f6b0 name raid_bdev1, state offline 00:16:03.807 0 00:16:03.807 10:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2386601 00:16:03.807 10:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2386601 ']' 00:16:03.807 10:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2386601 00:16:03.807 10:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:16:03.807 10:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:03.807 10:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2386601 00:16:03.807 10:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:03.807 10:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:03.807 10:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2386601' 00:16:03.807 killing process with pid 2386601 00:16:03.807 10:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # 
kill 2386601 00:16:03.807 [2024-07-25 10:31:07.496040] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:03.807 10:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2386601 00:16:04.065 [2024-07-25 10:31:07.526477] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:04.324 10:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.wpVFSOWupr 00:16:04.324 10:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:04.324 10:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:04.324 10:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:04.324 10:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:04.324 10:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:04.324 10:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:04.324 10:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:04.324 00:16:04.324 real 0m7.060s 00:16:04.324 user 0m11.322s 00:16:04.324 sys 0m0.971s 00:16:04.324 10:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:04.324 10:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.324 ************************************ 00:16:04.324 END TEST raid_read_error_test 00:16:04.324 ************************************ 00:16:04.324 10:31:07 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:16:04.324 10:31:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:04.324 10:31:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:04.324 10:31:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:04.324 
************************************ 00:16:04.324 START TEST raid_write_error_test 00:16:04.324 ************************************ 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.87FWekmDG5 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2387499 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2387499 /var/tmp/spdk-raid.sock 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2387499 ']' 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:16:04.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:04.324 10:31:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.324 [2024-07-25 10:31:07.952578] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:16:04.324 [2024-07-25 10:31:07.952645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387499 ] 00:16:04.582 [2024-07-25 10:31:08.047732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.582 [2024-07-25 10:31:08.155139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.582 [2024-07-25 10:31:08.236771] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.582 [2024-07-25 10:31:08.236807] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.515 10:31:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:05.515 10:31:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:05.515 10:31:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:05.515 10:31:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:05.515 BaseBdev1_malloc 00:16:05.773 10:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:05.773 true 00:16:06.031 10:31:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:06.031 [2024-07-25 10:31:09.704095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:06.031 [2024-07-25 10:31:09.704192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.031 [2024-07-25 10:31:09.704226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2169250 00:16:06.031 [2024-07-25 10:31:09.704246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.031 [2024-07-25 10:31:09.706025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.031 [2024-07-25 10:31:09.706055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:06.031 BaseBdev1 00:16:06.031 10:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:06.031 10:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:06.289 BaseBdev2_malloc 00:16:06.289 10:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:06.854 true 00:16:06.854 10:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:06.854 [2024-07-25 10:31:10.541924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:06.854 [2024-07-25 10:31:10.541978] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.854 [2024-07-25 10:31:10.542001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2158650 00:16:06.854 [2024-07-25 10:31:10.542016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.854 [2024-07-25 10:31:10.543485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.854 [2024-07-25 10:31:10.543513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:06.854 BaseBdev2 00:16:07.112 10:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:07.112 10:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:07.112 BaseBdev3_malloc 00:16:07.112 10:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:07.369 true 00:16:07.369 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:07.625 [2024-07-25 10:31:11.323714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:07.625 [2024-07-25 10:31:11.323791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.625 [2024-07-25 10:31:11.323824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x214e5d0 00:16:07.625 [2024-07-25 10:31:11.323845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.625 [2024-07-25 10:31:11.325553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:07.625 [2024-07-25 10:31:11.325579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:07.625 BaseBdev3 00:16:07.884 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:07.884 [2024-07-25 10:31:11.580455] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.884 [2024-07-25 10:31:11.581724] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.884 [2024-07-25 10:31:11.581802] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.884 [2024-07-25 10:31:11.582047] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fad6b0 00:16:07.884 [2024-07-25 10:31:11.582066] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:07.884 [2024-07-25 10:31:11.582295] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fadb00 00:16:07.884 [2024-07-25 10:31:11.582491] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fad6b0 00:16:07.884 [2024-07-25 10:31:11.582509] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1fad6b0 00:16:07.884 [2024-07-25 10:31:11.582641] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.141 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:08.141 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:08.141 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:08.141 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:16:08.141 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:08.141 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:08.141 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.141 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.141 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.141 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.141 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.141 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.399 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.399 "name": "raid_bdev1", 00:16:08.399 "uuid": "d09bcb3e-5f91-4c6c-a1f5-1716e11968b9", 00:16:08.399 "strip_size_kb": 0, 00:16:08.399 "state": "online", 00:16:08.399 "raid_level": "raid1", 00:16:08.399 "superblock": true, 00:16:08.399 "num_base_bdevs": 3, 00:16:08.399 "num_base_bdevs_discovered": 3, 00:16:08.399 "num_base_bdevs_operational": 3, 00:16:08.399 "base_bdevs_list": [ 00:16:08.399 { 00:16:08.399 "name": "BaseBdev1", 00:16:08.399 "uuid": "f8d99c4f-c0d1-5cab-9235-756dbd7b51c5", 00:16:08.399 "is_configured": true, 00:16:08.399 "data_offset": 2048, 00:16:08.399 "data_size": 63488 00:16:08.399 }, 00:16:08.399 { 00:16:08.399 "name": "BaseBdev2", 00:16:08.399 "uuid": "1571dead-8e11-5e11-9340-405fadba567e", 00:16:08.399 "is_configured": true, 00:16:08.399 "data_offset": 2048, 00:16:08.399 "data_size": 63488 00:16:08.399 }, 00:16:08.399 { 00:16:08.399 "name": "BaseBdev3", 00:16:08.399 
"uuid": "a4e41bb9-cdf7-508c-a18f-f09ee8cc48e7", 00:16:08.399 "is_configured": true, 00:16:08.399 "data_offset": 2048, 00:16:08.399 "data_size": 63488 00:16:08.399 } 00:16:08.399 ] 00:16:08.399 }' 00:16:08.399 10:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:08.399 10:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.964 10:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:08.964 10:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:08.964 [2024-07-25 10:31:12.575471] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fb3510 00:16:09.901 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:10.161 [2024-07-25 10:31:13.729556] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:10.161 [2024-07-25 10:31:13.729640] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:10.161 [2024-07-25 10:31:13.729862] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1fb3510 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.161 10:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.419 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:10.419 "name": "raid_bdev1", 00:16:10.419 "uuid": "d09bcb3e-5f91-4c6c-a1f5-1716e11968b9", 00:16:10.419 "strip_size_kb": 0, 00:16:10.419 "state": "online", 00:16:10.419 "raid_level": "raid1", 00:16:10.419 "superblock": true, 00:16:10.419 "num_base_bdevs": 3, 00:16:10.419 "num_base_bdevs_discovered": 2, 00:16:10.419 "num_base_bdevs_operational": 2, 00:16:10.419 "base_bdevs_list": [ 00:16:10.419 { 00:16:10.419 "name": null, 00:16:10.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.419 "is_configured": false, 00:16:10.419 "data_offset": 2048, 
00:16:10.419 "data_size": 63488 00:16:10.419 }, 00:16:10.419 { 00:16:10.419 "name": "BaseBdev2", 00:16:10.419 "uuid": "1571dead-8e11-5e11-9340-405fadba567e", 00:16:10.419 "is_configured": true, 00:16:10.419 "data_offset": 2048, 00:16:10.419 "data_size": 63488 00:16:10.419 }, 00:16:10.419 { 00:16:10.419 "name": "BaseBdev3", 00:16:10.419 "uuid": "a4e41bb9-cdf7-508c-a18f-f09ee8cc48e7", 00:16:10.419 "is_configured": true, 00:16:10.419 "data_offset": 2048, 00:16:10.419 "data_size": 63488 00:16:10.419 } 00:16:10.419 ] 00:16:10.419 }' 00:16:10.419 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:10.419 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.986 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:11.244 [2024-07-25 10:31:14.805879] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:11.244 [2024-07-25 10:31:14.805936] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.244 [2024-07-25 10:31:14.808924] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.244 [2024-07-25 10:31:14.808962] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.244 [2024-07-25 10:31:14.809041] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.244 [2024-07-25 10:31:14.809056] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fad6b0 name raid_bdev1, state offline 00:16:11.244 0 00:16:11.244 10:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2387499 00:16:11.244 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2387499 ']' 00:16:11.244 10:31:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # kill -0 2387499 00:16:11.244 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:11.244 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:11.244 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2387499 00:16:11.244 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:11.244 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:11.244 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2387499' 00:16:11.244 killing process with pid 2387499 00:16:11.244 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2387499 00:16:11.244 [2024-07-25 10:31:14.856471] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.244 10:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2387499 00:16:11.244 [2024-07-25 10:31:14.883758] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:11.502 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.87FWekmDG5 00:16:11.502 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:11.502 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:11.502 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:11.502 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:11.502 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:11.502 10:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:11.502 10:31:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:11.502 00:16:11.502 real 0m7.284s 00:16:11.502 user 0m11.760s 00:16:11.502 sys 0m1.030s 00:16:11.502 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:11.502 10:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.502 ************************************ 00:16:11.502 END TEST raid_write_error_test 00:16:11.502 ************************************ 00:16:11.502 10:31:15 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:16:11.502 10:31:15 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:11.502 10:31:15 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:16:11.502 10:31:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:11.502 10:31:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:11.502 10:31:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:11.760 ************************************ 00:16:11.760 START TEST raid_state_function_test 00:16:11.760 ************************************ 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:11.760 10:31:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:11.760 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2388514 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2388514' 00:16:11.761 Process raid pid: 2388514 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2388514 /var/tmp/spdk-raid.sock 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2388514 ']' 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:11.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:11.761 10:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.761 [2024-07-25 10:31:15.286124] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:16:11.761 [2024-07-25 10:31:15.286221] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.761 [2024-07-25 10:31:15.365240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.019 [2024-07-25 10:31:15.479738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.019 [2024-07-25 10:31:15.553324] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:12.019 [2024-07-25 10:31:15.553364] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:12.584 10:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:12.584 10:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:12.584 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:12.842 [2024-07-25 10:31:16.447759] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:12.842 [2024-07-25 10:31:16.447801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:12.842 [2024-07-25 10:31:16.447826] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.842 [2024-07-25 10:31:16.447837] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.842 [2024-07-25 10:31:16.447844] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:12.842 [2024-07-25 10:31:16.447854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:12.842 [2024-07-25 10:31:16.447862] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:12.842 [2024-07-25 10:31:16.447872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:12.842 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:12.842 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:12.842 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:12.842 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:12.842 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:12.842 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:12.842 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:12.842 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:12.842 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:12.842 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:12.842 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.842 10:31:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.100 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:13.100 "name": "Existed_Raid", 00:16:13.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.100 "strip_size_kb": 64, 00:16:13.100 "state": "configuring", 00:16:13.100 "raid_level": "raid0", 00:16:13.100 "superblock": false, 00:16:13.100 "num_base_bdevs": 4, 00:16:13.100 "num_base_bdevs_discovered": 0, 00:16:13.100 "num_base_bdevs_operational": 4, 00:16:13.100 "base_bdevs_list": [ 00:16:13.100 { 00:16:13.100 "name": "BaseBdev1", 00:16:13.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.100 "is_configured": false, 00:16:13.100 "data_offset": 0, 00:16:13.100 "data_size": 0 00:16:13.100 }, 00:16:13.100 { 00:16:13.100 "name": "BaseBdev2", 00:16:13.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.100 "is_configured": false, 00:16:13.100 "data_offset": 0, 00:16:13.100 "data_size": 0 00:16:13.100 }, 00:16:13.100 { 00:16:13.100 "name": "BaseBdev3", 00:16:13.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.100 "is_configured": false, 00:16:13.100 "data_offset": 0, 00:16:13.100 "data_size": 0 00:16:13.100 }, 00:16:13.100 { 00:16:13.100 "name": "BaseBdev4", 00:16:13.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.100 "is_configured": false, 00:16:13.100 "data_offset": 0, 00:16:13.100 "data_size": 0 00:16:13.100 } 00:16:13.100 ] 00:16:13.100 }' 00:16:13.100 10:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:13.100 10:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.665 10:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:13.922 [2024-07-25 10:31:17.450295] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:13.922 [2024-07-25 10:31:17.450327] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1679640 name Existed_Raid, state configuring 00:16:13.922 10:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:14.180 [2024-07-25 10:31:17.682896] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:14.180 [2024-07-25 10:31:17.682926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:14.180 [2024-07-25 10:31:17.682950] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.180 [2024-07-25 10:31:17.682960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.180 [2024-07-25 10:31:17.682968] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:14.180 [2024-07-25 10:31:17.682977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:14.180 [2024-07-25 10:31:17.682985] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:14.180 [2024-07-25 10:31:17.682994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:14.180 10:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:14.438 [2024-07-25 10:31:17.925700] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.438 BaseBdev1 00:16:14.438 10:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # 
waitforbdev BaseBdev1 00:16:14.438 10:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:14.438 10:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:14.438 10:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:14.438 10:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:14.438 10:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:14.438 10:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:14.695 10:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:14.953 [ 00:16:14.953 { 00:16:14.953 "name": "BaseBdev1", 00:16:14.953 "aliases": [ 00:16:14.953 "580d0df9-0c5a-4068-b965-7c571f1eb459" 00:16:14.953 ], 00:16:14.953 "product_name": "Malloc disk", 00:16:14.953 "block_size": 512, 00:16:14.953 "num_blocks": 65536, 00:16:14.953 "uuid": "580d0df9-0c5a-4068-b965-7c571f1eb459", 00:16:14.953 "assigned_rate_limits": { 00:16:14.953 "rw_ios_per_sec": 0, 00:16:14.953 "rw_mbytes_per_sec": 0, 00:16:14.953 "r_mbytes_per_sec": 0, 00:16:14.953 "w_mbytes_per_sec": 0 00:16:14.953 }, 00:16:14.953 "claimed": true, 00:16:14.953 "claim_type": "exclusive_write", 00:16:14.953 "zoned": false, 00:16:14.953 "supported_io_types": { 00:16:14.953 "read": true, 00:16:14.953 "write": true, 00:16:14.953 "unmap": true, 00:16:14.953 "flush": true, 00:16:14.953 "reset": true, 00:16:14.953 "nvme_admin": false, 00:16:14.953 "nvme_io": false, 00:16:14.953 "nvme_io_md": false, 00:16:14.953 "write_zeroes": true, 00:16:14.953 "zcopy": true, 00:16:14.953 
"get_zone_info": false, 00:16:14.953 "zone_management": false, 00:16:14.953 "zone_append": false, 00:16:14.953 "compare": false, 00:16:14.953 "compare_and_write": false, 00:16:14.953 "abort": true, 00:16:14.953 "seek_hole": false, 00:16:14.953 "seek_data": false, 00:16:14.953 "copy": true, 00:16:14.953 "nvme_iov_md": false 00:16:14.953 }, 00:16:14.953 "memory_domains": [ 00:16:14.953 { 00:16:14.953 "dma_device_id": "system", 00:16:14.953 "dma_device_type": 1 00:16:14.953 }, 00:16:14.953 { 00:16:14.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.953 "dma_device_type": 2 00:16:14.953 } 00:16:14.953 ], 00:16:14.953 "driver_specific": {} 00:16:14.953 } 00:16:14.953 ] 00:16:14.953 10:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:14.953 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:14.953 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:14.953 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:14.953 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:14.953 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:14.953 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:14.953 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.953 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.953 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:14.953 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.953 10:31:18 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.953 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.211 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.211 "name": "Existed_Raid", 00:16:15.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.211 "strip_size_kb": 64, 00:16:15.211 "state": "configuring", 00:16:15.211 "raid_level": "raid0", 00:16:15.211 "superblock": false, 00:16:15.211 "num_base_bdevs": 4, 00:16:15.211 "num_base_bdevs_discovered": 1, 00:16:15.211 "num_base_bdevs_operational": 4, 00:16:15.211 "base_bdevs_list": [ 00:16:15.211 { 00:16:15.211 "name": "BaseBdev1", 00:16:15.211 "uuid": "580d0df9-0c5a-4068-b965-7c571f1eb459", 00:16:15.211 "is_configured": true, 00:16:15.211 "data_offset": 0, 00:16:15.211 "data_size": 65536 00:16:15.211 }, 00:16:15.211 { 00:16:15.211 "name": "BaseBdev2", 00:16:15.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.211 "is_configured": false, 00:16:15.211 "data_offset": 0, 00:16:15.211 "data_size": 0 00:16:15.211 }, 00:16:15.211 { 00:16:15.211 "name": "BaseBdev3", 00:16:15.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.211 "is_configured": false, 00:16:15.211 "data_offset": 0, 00:16:15.211 "data_size": 0 00:16:15.211 }, 00:16:15.211 { 00:16:15.211 "name": "BaseBdev4", 00:16:15.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.211 "is_configured": false, 00:16:15.211 "data_offset": 0, 00:16:15.211 "data_size": 0 00:16:15.211 } 00:16:15.211 ] 00:16:15.212 }' 00:16:15.212 10:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.212 10:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.777 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:15.777 [2024-07-25 10:31:19.453836] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:15.777 [2024-07-25 10:31:19.453886] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1678e50 name Existed_Raid, state configuring 00:16:15.777 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:16.035 [2024-07-25 10:31:19.698480] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.035 [2024-07-25 10:31:19.699695] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.035 [2024-07-25 10:31:19.699722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.035 [2024-07-25 10:31:19.699747] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:16.035 [2024-07-25 10:31:19.699758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:16.035 [2024-07-25 10:31:19.699765] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:16.035 [2024-07-25 10:31:19.699775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.035 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.293 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:16.293 "name": "Existed_Raid", 00:16:16.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.293 "strip_size_kb": 64, 00:16:16.293 "state": "configuring", 00:16:16.293 "raid_level": "raid0", 00:16:16.293 "superblock": false, 00:16:16.293 "num_base_bdevs": 4, 00:16:16.293 "num_base_bdevs_discovered": 1, 00:16:16.293 "num_base_bdevs_operational": 4, 00:16:16.293 "base_bdevs_list": [ 00:16:16.293 { 00:16:16.293 "name": "BaseBdev1", 00:16:16.293 "uuid": "580d0df9-0c5a-4068-b965-7c571f1eb459", 00:16:16.293 "is_configured": true, 00:16:16.293 "data_offset": 0, 00:16:16.293 "data_size": 65536 
00:16:16.293 }, 00:16:16.293 { 00:16:16.293 "name": "BaseBdev2", 00:16:16.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.293 "is_configured": false, 00:16:16.293 "data_offset": 0, 00:16:16.293 "data_size": 0 00:16:16.293 }, 00:16:16.293 { 00:16:16.293 "name": "BaseBdev3", 00:16:16.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.293 "is_configured": false, 00:16:16.293 "data_offset": 0, 00:16:16.293 "data_size": 0 00:16:16.293 }, 00:16:16.293 { 00:16:16.293 "name": "BaseBdev4", 00:16:16.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.293 "is_configured": false, 00:16:16.293 "data_offset": 0, 00:16:16.293 "data_size": 0 00:16:16.293 } 00:16:16.293 ] 00:16:16.293 }' 00:16:16.293 10:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:16.293 10:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.859 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:17.118 [2024-07-25 10:31:20.757997] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.118 BaseBdev2 00:16:17.118 10:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:17.118 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:17.118 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:17.118 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:17.118 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:17.118 10:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:17.118 10:31:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:17.379 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:17.636 [ 00:16:17.636 { 00:16:17.636 "name": "BaseBdev2", 00:16:17.636 "aliases": [ 00:16:17.636 "587bdec8-1035-4ba8-8a86-a3cdafcdb848" 00:16:17.636 ], 00:16:17.636 "product_name": "Malloc disk", 00:16:17.636 "block_size": 512, 00:16:17.636 "num_blocks": 65536, 00:16:17.636 "uuid": "587bdec8-1035-4ba8-8a86-a3cdafcdb848", 00:16:17.636 "assigned_rate_limits": { 00:16:17.636 "rw_ios_per_sec": 0, 00:16:17.636 "rw_mbytes_per_sec": 0, 00:16:17.636 "r_mbytes_per_sec": 0, 00:16:17.636 "w_mbytes_per_sec": 0 00:16:17.636 }, 00:16:17.636 "claimed": true, 00:16:17.636 "claim_type": "exclusive_write", 00:16:17.636 "zoned": false, 00:16:17.636 "supported_io_types": { 00:16:17.636 "read": true, 00:16:17.636 "write": true, 00:16:17.636 "unmap": true, 00:16:17.636 "flush": true, 00:16:17.636 "reset": true, 00:16:17.636 "nvme_admin": false, 00:16:17.637 "nvme_io": false, 00:16:17.637 "nvme_io_md": false, 00:16:17.637 "write_zeroes": true, 00:16:17.637 "zcopy": true, 00:16:17.637 "get_zone_info": false, 00:16:17.637 "zone_management": false, 00:16:17.637 "zone_append": false, 00:16:17.637 "compare": false, 00:16:17.637 "compare_and_write": false, 00:16:17.637 "abort": true, 00:16:17.637 "seek_hole": false, 00:16:17.637 "seek_data": false, 00:16:17.637 "copy": true, 00:16:17.637 "nvme_iov_md": false 00:16:17.637 }, 00:16:17.637 "memory_domains": [ 00:16:17.637 { 00:16:17.637 "dma_device_id": "system", 00:16:17.637 "dma_device_type": 1 00:16:17.637 }, 00:16:17.637 { 00:16:17.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.637 "dma_device_type": 2 00:16:17.637 } 00:16:17.637 
], 00:16:17.637 "driver_specific": {} 00:16:17.637 } 00:16:17.637 ] 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.637 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.894 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.894 "name": 
"Existed_Raid", 00:16:17.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.894 "strip_size_kb": 64, 00:16:17.894 "state": "configuring", 00:16:17.894 "raid_level": "raid0", 00:16:17.894 "superblock": false, 00:16:17.894 "num_base_bdevs": 4, 00:16:17.894 "num_base_bdevs_discovered": 2, 00:16:17.894 "num_base_bdevs_operational": 4, 00:16:17.894 "base_bdevs_list": [ 00:16:17.894 { 00:16:17.894 "name": "BaseBdev1", 00:16:17.894 "uuid": "580d0df9-0c5a-4068-b965-7c571f1eb459", 00:16:17.894 "is_configured": true, 00:16:17.894 "data_offset": 0, 00:16:17.894 "data_size": 65536 00:16:17.894 }, 00:16:17.895 { 00:16:17.895 "name": "BaseBdev2", 00:16:17.895 "uuid": "587bdec8-1035-4ba8-8a86-a3cdafcdb848", 00:16:17.895 "is_configured": true, 00:16:17.895 "data_offset": 0, 00:16:17.895 "data_size": 65536 00:16:17.895 }, 00:16:17.895 { 00:16:17.895 "name": "BaseBdev3", 00:16:17.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.895 "is_configured": false, 00:16:17.895 "data_offset": 0, 00:16:17.895 "data_size": 0 00:16:17.895 }, 00:16:17.895 { 00:16:17.895 "name": "BaseBdev4", 00:16:17.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.895 "is_configured": false, 00:16:17.895 "data_offset": 0, 00:16:17.895 "data_size": 0 00:16:17.895 } 00:16:17.895 ] 00:16:17.895 }' 00:16:17.895 10:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.895 10:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.498 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:18.757 [2024-07-25 10:31:22.286788] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.757 BaseBdev3 00:16:18.757 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:18.757 10:31:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:18.757 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:18.757 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:18.757 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:18.757 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:18.757 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.014 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:19.273 [ 00:16:19.273 { 00:16:19.273 "name": "BaseBdev3", 00:16:19.273 "aliases": [ 00:16:19.273 "b9cd9540-91bd-4e17-a0dc-2be51b30978e" 00:16:19.273 ], 00:16:19.273 "product_name": "Malloc disk", 00:16:19.273 "block_size": 512, 00:16:19.273 "num_blocks": 65536, 00:16:19.273 "uuid": "b9cd9540-91bd-4e17-a0dc-2be51b30978e", 00:16:19.273 "assigned_rate_limits": { 00:16:19.273 "rw_ios_per_sec": 0, 00:16:19.273 "rw_mbytes_per_sec": 0, 00:16:19.273 "r_mbytes_per_sec": 0, 00:16:19.273 "w_mbytes_per_sec": 0 00:16:19.273 }, 00:16:19.273 "claimed": true, 00:16:19.273 "claim_type": "exclusive_write", 00:16:19.273 "zoned": false, 00:16:19.273 "supported_io_types": { 00:16:19.273 "read": true, 00:16:19.273 "write": true, 00:16:19.273 "unmap": true, 00:16:19.273 "flush": true, 00:16:19.273 "reset": true, 00:16:19.273 "nvme_admin": false, 00:16:19.273 "nvme_io": false, 00:16:19.273 "nvme_io_md": false, 00:16:19.273 "write_zeroes": true, 00:16:19.273 "zcopy": true, 00:16:19.273 "get_zone_info": false, 00:16:19.273 
"zone_management": false, 00:16:19.273 "zone_append": false, 00:16:19.273 "compare": false, 00:16:19.273 "compare_and_write": false, 00:16:19.273 "abort": true, 00:16:19.273 "seek_hole": false, 00:16:19.274 "seek_data": false, 00:16:19.274 "copy": true, 00:16:19.274 "nvme_iov_md": false 00:16:19.274 }, 00:16:19.274 "memory_domains": [ 00:16:19.274 { 00:16:19.274 "dma_device_id": "system", 00:16:19.274 "dma_device_type": 1 00:16:19.274 }, 00:16:19.274 { 00:16:19.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.274 "dma_device_type": 2 00:16:19.274 } 00:16:19.274 ], 00:16:19.274 "driver_specific": {} 00:16:19.274 } 00:16:19.274 ] 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.274 10:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.532 10:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:19.532 "name": "Existed_Raid", 00:16:19.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.532 "strip_size_kb": 64, 00:16:19.532 "state": "configuring", 00:16:19.532 "raid_level": "raid0", 00:16:19.532 "superblock": false, 00:16:19.533 "num_base_bdevs": 4, 00:16:19.533 "num_base_bdevs_discovered": 3, 00:16:19.533 "num_base_bdevs_operational": 4, 00:16:19.533 "base_bdevs_list": [ 00:16:19.533 { 00:16:19.533 "name": "BaseBdev1", 00:16:19.533 "uuid": "580d0df9-0c5a-4068-b965-7c571f1eb459", 00:16:19.533 "is_configured": true, 00:16:19.533 "data_offset": 0, 00:16:19.533 "data_size": 65536 00:16:19.533 }, 00:16:19.533 { 00:16:19.533 "name": "BaseBdev2", 00:16:19.533 "uuid": "587bdec8-1035-4ba8-8a86-a3cdafcdb848", 00:16:19.533 "is_configured": true, 00:16:19.533 "data_offset": 0, 00:16:19.533 "data_size": 65536 00:16:19.533 }, 00:16:19.533 { 00:16:19.533 "name": "BaseBdev3", 00:16:19.533 "uuid": "b9cd9540-91bd-4e17-a0dc-2be51b30978e", 00:16:19.533 "is_configured": true, 00:16:19.533 "data_offset": 0, 00:16:19.533 "data_size": 65536 00:16:19.533 }, 00:16:19.533 { 00:16:19.533 "name": "BaseBdev4", 00:16:19.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.533 "is_configured": false, 00:16:19.533 "data_offset": 0, 00:16:19.533 "data_size": 0 00:16:19.533 } 00:16:19.533 ] 00:16:19.533 }' 00:16:19.533 10:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:16:19.533 10:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.097 10:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:20.355 [2024-07-25 10:31:23.985419] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:20.355 [2024-07-25 10:31:23.985456] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1679cb0 00:16:20.355 [2024-07-25 10:31:23.985481] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:20.355 [2024-07-25 10:31:23.985676] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1822f00 00:16:20.355 [2024-07-25 10:31:23.985835] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1679cb0 00:16:20.355 [2024-07-25 10:31:23.985851] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1679cb0 00:16:20.355 [2024-07-25 10:31:23.986079] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.355 BaseBdev4 00:16:20.355 10:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:20.355 10:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:20.356 10:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:20.356 10:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:20.356 10:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:20.356 10:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:20.356 10:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:20.613 10:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:20.871 [ 00:16:20.871 { 00:16:20.871 "name": "BaseBdev4", 00:16:20.871 "aliases": [ 00:16:20.871 "5c0093c6-18a2-49bb-a039-81e7e990c09e" 00:16:20.871 ], 00:16:20.871 "product_name": "Malloc disk", 00:16:20.871 "block_size": 512, 00:16:20.871 "num_blocks": 65536, 00:16:20.871 "uuid": "5c0093c6-18a2-49bb-a039-81e7e990c09e", 00:16:20.871 "assigned_rate_limits": { 00:16:20.871 "rw_ios_per_sec": 0, 00:16:20.871 "rw_mbytes_per_sec": 0, 00:16:20.871 "r_mbytes_per_sec": 0, 00:16:20.871 "w_mbytes_per_sec": 0 00:16:20.871 }, 00:16:20.871 "claimed": true, 00:16:20.871 "claim_type": "exclusive_write", 00:16:20.871 "zoned": false, 00:16:20.871 "supported_io_types": { 00:16:20.871 "read": true, 00:16:20.871 "write": true, 00:16:20.871 "unmap": true, 00:16:20.871 "flush": true, 00:16:20.871 "reset": true, 00:16:20.871 "nvme_admin": false, 00:16:20.871 "nvme_io": false, 00:16:20.871 "nvme_io_md": false, 00:16:20.871 "write_zeroes": true, 00:16:20.871 "zcopy": true, 00:16:20.871 "get_zone_info": false, 00:16:20.871 "zone_management": false, 00:16:20.871 "zone_append": false, 00:16:20.871 "compare": false, 00:16:20.871 "compare_and_write": false, 00:16:20.871 "abort": true, 00:16:20.871 "seek_hole": false, 00:16:20.871 "seek_data": false, 00:16:20.871 "copy": true, 00:16:20.871 "nvme_iov_md": false 00:16:20.871 }, 00:16:20.871 "memory_domains": [ 00:16:20.871 { 00:16:20.871 "dma_device_id": "system", 00:16:20.871 "dma_device_type": 1 00:16:20.871 }, 00:16:20.871 { 00:16:20.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.871 "dma_device_type": 2 00:16:20.871 } 00:16:20.871 ], 00:16:20.871 "driver_specific": {} 00:16:20.871 } 00:16:20.871 ] 
00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.871 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.129 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.129 "name": "Existed_Raid", 00:16:21.129 "uuid": "f04e7802-4c0b-4f89-8781-4b16d822f3a3", 
00:16:21.129 "strip_size_kb": 64, 00:16:21.129 "state": "online", 00:16:21.129 "raid_level": "raid0", 00:16:21.129 "superblock": false, 00:16:21.129 "num_base_bdevs": 4, 00:16:21.129 "num_base_bdevs_discovered": 4, 00:16:21.129 "num_base_bdevs_operational": 4, 00:16:21.129 "base_bdevs_list": [ 00:16:21.129 { 00:16:21.129 "name": "BaseBdev1", 00:16:21.129 "uuid": "580d0df9-0c5a-4068-b965-7c571f1eb459", 00:16:21.129 "is_configured": true, 00:16:21.129 "data_offset": 0, 00:16:21.129 "data_size": 65536 00:16:21.129 }, 00:16:21.129 { 00:16:21.129 "name": "BaseBdev2", 00:16:21.129 "uuid": "587bdec8-1035-4ba8-8a86-a3cdafcdb848", 00:16:21.129 "is_configured": true, 00:16:21.129 "data_offset": 0, 00:16:21.129 "data_size": 65536 00:16:21.129 }, 00:16:21.129 { 00:16:21.129 "name": "BaseBdev3", 00:16:21.129 "uuid": "b9cd9540-91bd-4e17-a0dc-2be51b30978e", 00:16:21.129 "is_configured": true, 00:16:21.129 "data_offset": 0, 00:16:21.129 "data_size": 65536 00:16:21.129 }, 00:16:21.129 { 00:16:21.129 "name": "BaseBdev4", 00:16:21.129 "uuid": "5c0093c6-18a2-49bb-a039-81e7e990c09e", 00:16:21.129 "is_configured": true, 00:16:21.129 "data_offset": 0, 00:16:21.129 "data_size": 65536 00:16:21.129 } 00:16:21.129 ] 00:16:21.129 }' 00:16:21.129 10:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.129 10:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.696 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:21.696 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:21.696 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:21.696 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:21.696 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:16:21.696 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:21.696 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:21.696 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:21.954 [2024-07-25 10:31:25.485829] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.954 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:21.954 "name": "Existed_Raid", 00:16:21.954 "aliases": [ 00:16:21.954 "f04e7802-4c0b-4f89-8781-4b16d822f3a3" 00:16:21.954 ], 00:16:21.954 "product_name": "Raid Volume", 00:16:21.954 "block_size": 512, 00:16:21.954 "num_blocks": 262144, 00:16:21.954 "uuid": "f04e7802-4c0b-4f89-8781-4b16d822f3a3", 00:16:21.954 "assigned_rate_limits": { 00:16:21.954 "rw_ios_per_sec": 0, 00:16:21.954 "rw_mbytes_per_sec": 0, 00:16:21.954 "r_mbytes_per_sec": 0, 00:16:21.954 "w_mbytes_per_sec": 0 00:16:21.954 }, 00:16:21.954 "claimed": false, 00:16:21.954 "zoned": false, 00:16:21.954 "supported_io_types": { 00:16:21.954 "read": true, 00:16:21.954 "write": true, 00:16:21.954 "unmap": true, 00:16:21.954 "flush": true, 00:16:21.954 "reset": true, 00:16:21.954 "nvme_admin": false, 00:16:21.954 "nvme_io": false, 00:16:21.954 "nvme_io_md": false, 00:16:21.954 "write_zeroes": true, 00:16:21.954 "zcopy": false, 00:16:21.954 "get_zone_info": false, 00:16:21.954 "zone_management": false, 00:16:21.954 "zone_append": false, 00:16:21.954 "compare": false, 00:16:21.954 "compare_and_write": false, 00:16:21.954 "abort": false, 00:16:21.954 "seek_hole": false, 00:16:21.954 "seek_data": false, 00:16:21.954 "copy": false, 00:16:21.954 "nvme_iov_md": false 00:16:21.954 }, 00:16:21.954 "memory_domains": [ 00:16:21.954 { 00:16:21.954 "dma_device_id": "system", 00:16:21.954 
"dma_device_type": 1 00:16:21.954 }, 00:16:21.954 { 00:16:21.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.954 "dma_device_type": 2 00:16:21.954 }, 00:16:21.954 { 00:16:21.954 "dma_device_id": "system", 00:16:21.954 "dma_device_type": 1 00:16:21.954 }, 00:16:21.954 { 00:16:21.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.954 "dma_device_type": 2 00:16:21.954 }, 00:16:21.954 { 00:16:21.954 "dma_device_id": "system", 00:16:21.954 "dma_device_type": 1 00:16:21.954 }, 00:16:21.954 { 00:16:21.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.954 "dma_device_type": 2 00:16:21.954 }, 00:16:21.954 { 00:16:21.955 "dma_device_id": "system", 00:16:21.955 "dma_device_type": 1 00:16:21.955 }, 00:16:21.955 { 00:16:21.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.955 "dma_device_type": 2 00:16:21.955 } 00:16:21.955 ], 00:16:21.955 "driver_specific": { 00:16:21.955 "raid": { 00:16:21.955 "uuid": "f04e7802-4c0b-4f89-8781-4b16d822f3a3", 00:16:21.955 "strip_size_kb": 64, 00:16:21.955 "state": "online", 00:16:21.955 "raid_level": "raid0", 00:16:21.955 "superblock": false, 00:16:21.955 "num_base_bdevs": 4, 00:16:21.955 "num_base_bdevs_discovered": 4, 00:16:21.955 "num_base_bdevs_operational": 4, 00:16:21.955 "base_bdevs_list": [ 00:16:21.955 { 00:16:21.955 "name": "BaseBdev1", 00:16:21.955 "uuid": "580d0df9-0c5a-4068-b965-7c571f1eb459", 00:16:21.955 "is_configured": true, 00:16:21.955 "data_offset": 0, 00:16:21.955 "data_size": 65536 00:16:21.955 }, 00:16:21.955 { 00:16:21.955 "name": "BaseBdev2", 00:16:21.955 "uuid": "587bdec8-1035-4ba8-8a86-a3cdafcdb848", 00:16:21.955 "is_configured": true, 00:16:21.955 "data_offset": 0, 00:16:21.955 "data_size": 65536 00:16:21.955 }, 00:16:21.955 { 00:16:21.955 "name": "BaseBdev3", 00:16:21.955 "uuid": "b9cd9540-91bd-4e17-a0dc-2be51b30978e", 00:16:21.955 "is_configured": true, 00:16:21.955 "data_offset": 0, 00:16:21.955 "data_size": 65536 00:16:21.955 }, 00:16:21.955 { 00:16:21.955 "name": "BaseBdev4", 00:16:21.955 
"uuid": "5c0093c6-18a2-49bb-a039-81e7e990c09e", 00:16:21.955 "is_configured": true, 00:16:21.955 "data_offset": 0, 00:16:21.955 "data_size": 65536 00:16:21.955 } 00:16:21.955 ] 00:16:21.955 } 00:16:21.955 } 00:16:21.955 }' 00:16:21.955 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:21.955 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:21.955 BaseBdev2 00:16:21.955 BaseBdev3 00:16:21.955 BaseBdev4' 00:16:21.955 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:21.955 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:21.955 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:22.213 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:22.213 "name": "BaseBdev1", 00:16:22.213 "aliases": [ 00:16:22.213 "580d0df9-0c5a-4068-b965-7c571f1eb459" 00:16:22.213 ], 00:16:22.213 "product_name": "Malloc disk", 00:16:22.213 "block_size": 512, 00:16:22.213 "num_blocks": 65536, 00:16:22.213 "uuid": "580d0df9-0c5a-4068-b965-7c571f1eb459", 00:16:22.213 "assigned_rate_limits": { 00:16:22.213 "rw_ios_per_sec": 0, 00:16:22.213 "rw_mbytes_per_sec": 0, 00:16:22.213 "r_mbytes_per_sec": 0, 00:16:22.213 "w_mbytes_per_sec": 0 00:16:22.213 }, 00:16:22.213 "claimed": true, 00:16:22.213 "claim_type": "exclusive_write", 00:16:22.213 "zoned": false, 00:16:22.213 "supported_io_types": { 00:16:22.213 "read": true, 00:16:22.213 "write": true, 00:16:22.213 "unmap": true, 00:16:22.213 "flush": true, 00:16:22.213 "reset": true, 00:16:22.213 "nvme_admin": false, 00:16:22.213 "nvme_io": false, 00:16:22.213 "nvme_io_md": false, 00:16:22.213 
"write_zeroes": true, 00:16:22.213 "zcopy": true, 00:16:22.213 "get_zone_info": false, 00:16:22.213 "zone_management": false, 00:16:22.213 "zone_append": false, 00:16:22.213 "compare": false, 00:16:22.213 "compare_and_write": false, 00:16:22.213 "abort": true, 00:16:22.213 "seek_hole": false, 00:16:22.213 "seek_data": false, 00:16:22.213 "copy": true, 00:16:22.213 "nvme_iov_md": false 00:16:22.213 }, 00:16:22.213 "memory_domains": [ 00:16:22.213 { 00:16:22.213 "dma_device_id": "system", 00:16:22.213 "dma_device_type": 1 00:16:22.213 }, 00:16:22.213 { 00:16:22.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.213 "dma_device_type": 2 00:16:22.213 } 00:16:22.213 ], 00:16:22.213 "driver_specific": {} 00:16:22.213 }' 00:16:22.213 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.213 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.213 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:22.213 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.213 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.470 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:22.470 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:22.470 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:22.470 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:22.471 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:22.471 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:22.471 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:22.471 10:31:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:22.471 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:22.471 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:22.728 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:22.728 "name": "BaseBdev2", 00:16:22.728 "aliases": [ 00:16:22.728 "587bdec8-1035-4ba8-8a86-a3cdafcdb848" 00:16:22.728 ], 00:16:22.728 "product_name": "Malloc disk", 00:16:22.728 "block_size": 512, 00:16:22.728 "num_blocks": 65536, 00:16:22.728 "uuid": "587bdec8-1035-4ba8-8a86-a3cdafcdb848", 00:16:22.728 "assigned_rate_limits": { 00:16:22.728 "rw_ios_per_sec": 0, 00:16:22.728 "rw_mbytes_per_sec": 0, 00:16:22.728 "r_mbytes_per_sec": 0, 00:16:22.728 "w_mbytes_per_sec": 0 00:16:22.728 }, 00:16:22.728 "claimed": true, 00:16:22.728 "claim_type": "exclusive_write", 00:16:22.728 "zoned": false, 00:16:22.728 "supported_io_types": { 00:16:22.728 "read": true, 00:16:22.728 "write": true, 00:16:22.728 "unmap": true, 00:16:22.728 "flush": true, 00:16:22.728 "reset": true, 00:16:22.728 "nvme_admin": false, 00:16:22.728 "nvme_io": false, 00:16:22.728 "nvme_io_md": false, 00:16:22.728 "write_zeroes": true, 00:16:22.728 "zcopy": true, 00:16:22.728 "get_zone_info": false, 00:16:22.728 "zone_management": false, 00:16:22.728 "zone_append": false, 00:16:22.728 "compare": false, 00:16:22.728 "compare_and_write": false, 00:16:22.728 "abort": true, 00:16:22.728 "seek_hole": false, 00:16:22.728 "seek_data": false, 00:16:22.728 "copy": true, 00:16:22.729 "nvme_iov_md": false 00:16:22.729 }, 00:16:22.729 "memory_domains": [ 00:16:22.729 { 00:16:22.729 "dma_device_id": "system", 00:16:22.729 "dma_device_type": 1 00:16:22.729 }, 00:16:22.729 { 00:16:22.729 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:22.729 "dma_device_type": 2 00:16:22.729 } 00:16:22.729 ], 00:16:22.729 "driver_specific": {} 00:16:22.729 }' 00:16:22.729 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.729 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.729 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:22.729 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.031 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.031 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:23.031 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.031 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.031 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.031 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.031 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.031 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.031 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.031 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:23.031 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.289 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.289 "name": "BaseBdev3", 00:16:23.289 "aliases": [ 00:16:23.290 
"b9cd9540-91bd-4e17-a0dc-2be51b30978e" 00:16:23.290 ], 00:16:23.290 "product_name": "Malloc disk", 00:16:23.290 "block_size": 512, 00:16:23.290 "num_blocks": 65536, 00:16:23.290 "uuid": "b9cd9540-91bd-4e17-a0dc-2be51b30978e", 00:16:23.290 "assigned_rate_limits": { 00:16:23.290 "rw_ios_per_sec": 0, 00:16:23.290 "rw_mbytes_per_sec": 0, 00:16:23.290 "r_mbytes_per_sec": 0, 00:16:23.290 "w_mbytes_per_sec": 0 00:16:23.290 }, 00:16:23.290 "claimed": true, 00:16:23.290 "claim_type": "exclusive_write", 00:16:23.290 "zoned": false, 00:16:23.290 "supported_io_types": { 00:16:23.290 "read": true, 00:16:23.290 "write": true, 00:16:23.290 "unmap": true, 00:16:23.290 "flush": true, 00:16:23.290 "reset": true, 00:16:23.290 "nvme_admin": false, 00:16:23.290 "nvme_io": false, 00:16:23.290 "nvme_io_md": false, 00:16:23.290 "write_zeroes": true, 00:16:23.290 "zcopy": true, 00:16:23.290 "get_zone_info": false, 00:16:23.290 "zone_management": false, 00:16:23.290 "zone_append": false, 00:16:23.290 "compare": false, 00:16:23.290 "compare_and_write": false, 00:16:23.290 "abort": true, 00:16:23.290 "seek_hole": false, 00:16:23.290 "seek_data": false, 00:16:23.290 "copy": true, 00:16:23.290 "nvme_iov_md": false 00:16:23.290 }, 00:16:23.290 "memory_domains": [ 00:16:23.290 { 00:16:23.290 "dma_device_id": "system", 00:16:23.290 "dma_device_type": 1 00:16:23.290 }, 00:16:23.290 { 00:16:23.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.290 "dma_device_type": 2 00:16:23.290 } 00:16:23.290 ], 00:16:23.290 "driver_specific": {} 00:16:23.290 }' 00:16:23.290 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.290 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.290 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:23.290 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.290 10:31:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.548 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:23.548 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.548 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.548 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.548 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.548 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.548 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.548 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.548 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:23.548 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.806 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.806 "name": "BaseBdev4", 00:16:23.806 "aliases": [ 00:16:23.806 "5c0093c6-18a2-49bb-a039-81e7e990c09e" 00:16:23.806 ], 00:16:23.806 "product_name": "Malloc disk", 00:16:23.806 "block_size": 512, 00:16:23.806 "num_blocks": 65536, 00:16:23.806 "uuid": "5c0093c6-18a2-49bb-a039-81e7e990c09e", 00:16:23.806 "assigned_rate_limits": { 00:16:23.806 "rw_ios_per_sec": 0, 00:16:23.806 "rw_mbytes_per_sec": 0, 00:16:23.806 "r_mbytes_per_sec": 0, 00:16:23.806 "w_mbytes_per_sec": 0 00:16:23.806 }, 00:16:23.806 "claimed": true, 00:16:23.806 "claim_type": "exclusive_write", 00:16:23.806 "zoned": false, 00:16:23.806 "supported_io_types": { 00:16:23.806 "read": true, 
00:16:23.806 "write": true, 00:16:23.806 "unmap": true, 00:16:23.806 "flush": true, 00:16:23.806 "reset": true, 00:16:23.806 "nvme_admin": false, 00:16:23.806 "nvme_io": false, 00:16:23.806 "nvme_io_md": false, 00:16:23.806 "write_zeroes": true, 00:16:23.806 "zcopy": true, 00:16:23.806 "get_zone_info": false, 00:16:23.806 "zone_management": false, 00:16:23.806 "zone_append": false, 00:16:23.806 "compare": false, 00:16:23.806 "compare_and_write": false, 00:16:23.806 "abort": true, 00:16:23.806 "seek_hole": false, 00:16:23.806 "seek_data": false, 00:16:23.806 "copy": true, 00:16:23.806 "nvme_iov_md": false 00:16:23.806 }, 00:16:23.806 "memory_domains": [ 00:16:23.806 { 00:16:23.806 "dma_device_id": "system", 00:16:23.806 "dma_device_type": 1 00:16:23.806 }, 00:16:23.806 { 00:16:23.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.806 "dma_device_type": 2 00:16:23.806 } 00:16:23.806 ], 00:16:23.806 "driver_specific": {} 00:16:23.806 }' 00:16:23.806 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.806 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.806 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:23.807 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.064 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.064 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:24.064 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.064 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.064 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:24.064 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.064 
10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.064 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:24.064 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:24.322 [2024-07-25 10:31:27.980237] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:24.322 [2024-07-25 10:31:27.980265] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.322 [2024-07-25 10:31:27.980324] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.322 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:24.322 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:24.322 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:24.322 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:24.322 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:24.323 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:24.323 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:24.323 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:24.323 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:24.323 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:24.323 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:16:24.323 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.323 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.323 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.323 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.323 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.323 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.580 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:24.580 "name": "Existed_Raid", 00:16:24.580 "uuid": "f04e7802-4c0b-4f89-8781-4b16d822f3a3", 00:16:24.580 "strip_size_kb": 64, 00:16:24.580 "state": "offline", 00:16:24.580 "raid_level": "raid0", 00:16:24.580 "superblock": false, 00:16:24.581 "num_base_bdevs": 4, 00:16:24.581 "num_base_bdevs_discovered": 3, 00:16:24.581 "num_base_bdevs_operational": 3, 00:16:24.581 "base_bdevs_list": [ 00:16:24.581 { 00:16:24.581 "name": null, 00:16:24.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.581 "is_configured": false, 00:16:24.581 "data_offset": 0, 00:16:24.581 "data_size": 65536 00:16:24.581 }, 00:16:24.581 { 00:16:24.581 "name": "BaseBdev2", 00:16:24.581 "uuid": "587bdec8-1035-4ba8-8a86-a3cdafcdb848", 00:16:24.581 "is_configured": true, 00:16:24.581 "data_offset": 0, 00:16:24.581 "data_size": 65536 00:16:24.581 }, 00:16:24.581 { 00:16:24.581 "name": "BaseBdev3", 00:16:24.581 "uuid": "b9cd9540-91bd-4e17-a0dc-2be51b30978e", 00:16:24.581 "is_configured": true, 00:16:24.581 "data_offset": 0, 00:16:24.581 "data_size": 65536 00:16:24.581 }, 00:16:24.581 { 00:16:24.581 "name": "BaseBdev4", 00:16:24.581 
"uuid": "5c0093c6-18a2-49bb-a039-81e7e990c09e", 00:16:24.581 "is_configured": true, 00:16:24.581 "data_offset": 0, 00:16:24.581 "data_size": 65536 00:16:24.581 } 00:16:24.581 ] 00:16:24.581 }' 00:16:24.581 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:24.581 10:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.147 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:25.147 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:25.147 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.147 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:25.405 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:25.405 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:25.405 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:25.663 [2024-07-25 10:31:29.249911] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:25.663 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:25.663 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:25.663 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.663 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:25.920 
10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:25.920 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:25.920 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:26.177 [2024-07-25 10:31:29.784628] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:26.177 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:26.177 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:26.177 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.177 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:26.435 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:26.435 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:26.435 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:26.693 [2024-07-25 10:31:30.286344] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:26.693 [2024-07-25 10:31:30.286401] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1679cb0 name Existed_Raid, state offline 00:16:26.693 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:26.693 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:26.693 10:31:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.693 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:26.950 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:26.950 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:26.950 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:26.950 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:26.950 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:26.950 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:27.208 BaseBdev2 00:16:27.208 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:27.208 10:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:27.208 10:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:27.208 10:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:27.208 10:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:27.208 10:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:27.208 10:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:27.466 10:31:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:27.724 [ 00:16:27.724 { 00:16:27.724 "name": "BaseBdev2", 00:16:27.724 "aliases": [ 00:16:27.724 "4990d788-5ba8-4ba3-bd8e-0016237c28fd" 00:16:27.724 ], 00:16:27.724 "product_name": "Malloc disk", 00:16:27.724 "block_size": 512, 00:16:27.724 "num_blocks": 65536, 00:16:27.724 "uuid": "4990d788-5ba8-4ba3-bd8e-0016237c28fd", 00:16:27.724 "assigned_rate_limits": { 00:16:27.724 "rw_ios_per_sec": 0, 00:16:27.724 "rw_mbytes_per_sec": 0, 00:16:27.724 "r_mbytes_per_sec": 0, 00:16:27.724 "w_mbytes_per_sec": 0 00:16:27.724 }, 00:16:27.724 "claimed": false, 00:16:27.724 "zoned": false, 00:16:27.724 "supported_io_types": { 00:16:27.724 "read": true, 00:16:27.724 "write": true, 00:16:27.724 "unmap": true, 00:16:27.724 "flush": true, 00:16:27.724 "reset": true, 00:16:27.724 "nvme_admin": false, 00:16:27.724 "nvme_io": false, 00:16:27.724 "nvme_io_md": false, 00:16:27.724 "write_zeroes": true, 00:16:27.724 "zcopy": true, 00:16:27.724 "get_zone_info": false, 00:16:27.724 "zone_management": false, 00:16:27.724 "zone_append": false, 00:16:27.724 "compare": false, 00:16:27.724 "compare_and_write": false, 00:16:27.724 "abort": true, 00:16:27.724 "seek_hole": false, 00:16:27.724 "seek_data": false, 00:16:27.724 "copy": true, 00:16:27.724 "nvme_iov_md": false 00:16:27.724 }, 00:16:27.724 "memory_domains": [ 00:16:27.724 { 00:16:27.724 "dma_device_id": "system", 00:16:27.724 "dma_device_type": 1 00:16:27.724 }, 00:16:27.724 { 00:16:27.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.724 "dma_device_type": 2 00:16:27.724 } 00:16:27.724 ], 00:16:27.724 "driver_specific": {} 00:16:27.724 } 00:16:27.724 ] 00:16:27.724 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:27.724 10:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:27.724 10:31:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:27.724 10:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:27.981 BaseBdev3 00:16:27.982 10:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:27.982 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:27.982 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:27.982 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:27.982 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:27.982 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:27.982 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:28.239 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:28.497 [ 00:16:28.497 { 00:16:28.497 "name": "BaseBdev3", 00:16:28.497 "aliases": [ 00:16:28.497 "69c2b5ac-e8aa-42ad-b5d0-085751ac8551" 00:16:28.497 ], 00:16:28.497 "product_name": "Malloc disk", 00:16:28.497 "block_size": 512, 00:16:28.497 "num_blocks": 65536, 00:16:28.497 "uuid": "69c2b5ac-e8aa-42ad-b5d0-085751ac8551", 00:16:28.497 "assigned_rate_limits": { 00:16:28.497 "rw_ios_per_sec": 0, 00:16:28.497 "rw_mbytes_per_sec": 0, 00:16:28.497 "r_mbytes_per_sec": 0, 00:16:28.497 "w_mbytes_per_sec": 0 00:16:28.497 }, 00:16:28.497 "claimed": false, 00:16:28.497 
"zoned": false, 00:16:28.497 "supported_io_types": { 00:16:28.497 "read": true, 00:16:28.497 "write": true, 00:16:28.497 "unmap": true, 00:16:28.497 "flush": true, 00:16:28.497 "reset": true, 00:16:28.497 "nvme_admin": false, 00:16:28.497 "nvme_io": false, 00:16:28.498 "nvme_io_md": false, 00:16:28.498 "write_zeroes": true, 00:16:28.498 "zcopy": true, 00:16:28.498 "get_zone_info": false, 00:16:28.498 "zone_management": false, 00:16:28.498 "zone_append": false, 00:16:28.498 "compare": false, 00:16:28.498 "compare_and_write": false, 00:16:28.498 "abort": true, 00:16:28.498 "seek_hole": false, 00:16:28.498 "seek_data": false, 00:16:28.498 "copy": true, 00:16:28.498 "nvme_iov_md": false 00:16:28.498 }, 00:16:28.498 "memory_domains": [ 00:16:28.498 { 00:16:28.498 "dma_device_id": "system", 00:16:28.498 "dma_device_type": 1 00:16:28.498 }, 00:16:28.498 { 00:16:28.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.498 "dma_device_type": 2 00:16:28.498 } 00:16:28.498 ], 00:16:28.498 "driver_specific": {} 00:16:28.498 } 00:16:28.498 ] 00:16:28.498 10:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:28.498 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:28.498 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:28.498 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:28.756 BaseBdev4 00:16:28.756 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:28.756 10:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:28.756 10:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:28.756 10:31:32 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:16:28.756 10:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:28.756 10:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:28.756 10:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.014 10:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:29.272 [ 00:16:29.272 { 00:16:29.272 "name": "BaseBdev4", 00:16:29.272 "aliases": [ 00:16:29.272 "42e7c064-1f6b-4d63-bf8b-3310db3948ce" 00:16:29.272 ], 00:16:29.272 "product_name": "Malloc disk", 00:16:29.272 "block_size": 512, 00:16:29.272 "num_blocks": 65536, 00:16:29.272 "uuid": "42e7c064-1f6b-4d63-bf8b-3310db3948ce", 00:16:29.272 "assigned_rate_limits": { 00:16:29.272 "rw_ios_per_sec": 0, 00:16:29.272 "rw_mbytes_per_sec": 0, 00:16:29.272 "r_mbytes_per_sec": 0, 00:16:29.272 "w_mbytes_per_sec": 0 00:16:29.272 }, 00:16:29.272 "claimed": false, 00:16:29.272 "zoned": false, 00:16:29.272 "supported_io_types": { 00:16:29.272 "read": true, 00:16:29.272 "write": true, 00:16:29.272 "unmap": true, 00:16:29.272 "flush": true, 00:16:29.272 "reset": true, 00:16:29.272 "nvme_admin": false, 00:16:29.272 "nvme_io": false, 00:16:29.272 "nvme_io_md": false, 00:16:29.272 "write_zeroes": true, 00:16:29.272 "zcopy": true, 00:16:29.272 "get_zone_info": false, 00:16:29.272 "zone_management": false, 00:16:29.272 "zone_append": false, 00:16:29.272 "compare": false, 00:16:29.272 "compare_and_write": false, 00:16:29.272 "abort": true, 00:16:29.272 "seek_hole": false, 00:16:29.272 "seek_data": false, 00:16:29.272 "copy": true, 00:16:29.272 "nvme_iov_md": false 00:16:29.272 }, 00:16:29.272 
"memory_domains": [ 00:16:29.272 { 00:16:29.272 "dma_device_id": "system", 00:16:29.272 "dma_device_type": 1 00:16:29.272 }, 00:16:29.272 { 00:16:29.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.272 "dma_device_type": 2 00:16:29.272 } 00:16:29.272 ], 00:16:29.272 "driver_specific": {} 00:16:29.272 } 00:16:29.272 ] 00:16:29.272 10:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:29.272 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:29.272 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:29.272 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:29.529 [2024-07-25 10:31:33.110359] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.529 [2024-07-25 10:31:33.110398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.529 [2024-07-25 10:31:33.110440] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.529 [2024-07-25 10:31:33.111776] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.529 [2024-07-25 10:31:33.111827] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:29.529 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:29.529 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:29.529 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:29.529 10:31:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:29.530 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:29.530 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:29.530 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.530 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.530 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.530 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.530 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.530 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.787 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:29.787 "name": "Existed_Raid", 00:16:29.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.787 "strip_size_kb": 64, 00:16:29.787 "state": "configuring", 00:16:29.787 "raid_level": "raid0", 00:16:29.787 "superblock": false, 00:16:29.787 "num_base_bdevs": 4, 00:16:29.787 "num_base_bdevs_discovered": 3, 00:16:29.787 "num_base_bdevs_operational": 4, 00:16:29.787 "base_bdevs_list": [ 00:16:29.787 { 00:16:29.787 "name": "BaseBdev1", 00:16:29.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.787 "is_configured": false, 00:16:29.787 "data_offset": 0, 00:16:29.787 "data_size": 0 00:16:29.787 }, 00:16:29.787 { 00:16:29.787 "name": "BaseBdev2", 00:16:29.787 "uuid": "4990d788-5ba8-4ba3-bd8e-0016237c28fd", 00:16:29.787 "is_configured": true, 00:16:29.787 "data_offset": 0, 00:16:29.787 "data_size": 65536 00:16:29.787 }, 
00:16:29.787 { 00:16:29.787 "name": "BaseBdev3", 00:16:29.787 "uuid": "69c2b5ac-e8aa-42ad-b5d0-085751ac8551", 00:16:29.787 "is_configured": true, 00:16:29.787 "data_offset": 0, 00:16:29.787 "data_size": 65536 00:16:29.787 }, 00:16:29.787 { 00:16:29.787 "name": "BaseBdev4", 00:16:29.787 "uuid": "42e7c064-1f6b-4d63-bf8b-3310db3948ce", 00:16:29.787 "is_configured": true, 00:16:29.787 "data_offset": 0, 00:16:29.787 "data_size": 65536 00:16:29.787 } 00:16:29.787 ] 00:16:29.787 }' 00:16:29.787 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:29.787 10:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.352 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:30.609 [2024-07-25 10:31:34.205290] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:30.609 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:30.609 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:30.609 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:30.609 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:30.609 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:30.609 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:30.609 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.609 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.609 10:31:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.609 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.609 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.609 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.867 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.867 "name": "Existed_Raid", 00:16:30.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.867 "strip_size_kb": 64, 00:16:30.867 "state": "configuring", 00:16:30.867 "raid_level": "raid0", 00:16:30.867 "superblock": false, 00:16:30.867 "num_base_bdevs": 4, 00:16:30.867 "num_base_bdevs_discovered": 2, 00:16:30.867 "num_base_bdevs_operational": 4, 00:16:30.867 "base_bdevs_list": [ 00:16:30.867 { 00:16:30.867 "name": "BaseBdev1", 00:16:30.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.867 "is_configured": false, 00:16:30.867 "data_offset": 0, 00:16:30.867 "data_size": 0 00:16:30.867 }, 00:16:30.867 { 00:16:30.867 "name": null, 00:16:30.867 "uuid": "4990d788-5ba8-4ba3-bd8e-0016237c28fd", 00:16:30.867 "is_configured": false, 00:16:30.867 "data_offset": 0, 00:16:30.867 "data_size": 65536 00:16:30.867 }, 00:16:30.867 { 00:16:30.867 "name": "BaseBdev3", 00:16:30.867 "uuid": "69c2b5ac-e8aa-42ad-b5d0-085751ac8551", 00:16:30.867 "is_configured": true, 00:16:30.867 "data_offset": 0, 00:16:30.867 "data_size": 65536 00:16:30.867 }, 00:16:30.867 { 00:16:30.867 "name": "BaseBdev4", 00:16:30.867 "uuid": "42e7c064-1f6b-4d63-bf8b-3310db3948ce", 00:16:30.867 "is_configured": true, 00:16:30.867 "data_offset": 0, 00:16:30.867 "data_size": 65536 00:16:30.867 } 00:16:30.867 ] 00:16:30.867 }' 00:16:30.867 10:31:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.867 10:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.432 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.432 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:31.690 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:31.690 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:31.947 [2024-07-25 10:31:35.505526] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.947 BaseBdev1 00:16:31.947 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:31.947 10:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:31.947 10:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:31.947 10:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:31.947 10:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:31.947 10:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:31.947 10:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:32.204 10:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:32.461 [ 00:16:32.461 { 00:16:32.461 "name": "BaseBdev1", 00:16:32.461 "aliases": [ 00:16:32.461 "108ec6b1-93bd-48ac-a310-cc625c932412" 00:16:32.461 ], 00:16:32.461 "product_name": "Malloc disk", 00:16:32.461 "block_size": 512, 00:16:32.461 "num_blocks": 65536, 00:16:32.461 "uuid": "108ec6b1-93bd-48ac-a310-cc625c932412", 00:16:32.461 "assigned_rate_limits": { 00:16:32.461 "rw_ios_per_sec": 0, 00:16:32.461 "rw_mbytes_per_sec": 0, 00:16:32.461 "r_mbytes_per_sec": 0, 00:16:32.461 "w_mbytes_per_sec": 0 00:16:32.461 }, 00:16:32.461 "claimed": true, 00:16:32.461 "claim_type": "exclusive_write", 00:16:32.461 "zoned": false, 00:16:32.461 "supported_io_types": { 00:16:32.461 "read": true, 00:16:32.461 "write": true, 00:16:32.461 "unmap": true, 00:16:32.461 "flush": true, 00:16:32.461 "reset": true, 00:16:32.461 "nvme_admin": false, 00:16:32.461 "nvme_io": false, 00:16:32.461 "nvme_io_md": false, 00:16:32.461 "write_zeroes": true, 00:16:32.461 "zcopy": true, 00:16:32.461 "get_zone_info": false, 00:16:32.461 "zone_management": false, 00:16:32.461 "zone_append": false, 00:16:32.461 "compare": false, 00:16:32.461 "compare_and_write": false, 00:16:32.461 "abort": true, 00:16:32.461 "seek_hole": false, 00:16:32.461 "seek_data": false, 00:16:32.461 "copy": true, 00:16:32.461 "nvme_iov_md": false 00:16:32.461 }, 00:16:32.461 "memory_domains": [ 00:16:32.461 { 00:16:32.461 "dma_device_id": "system", 00:16:32.461 "dma_device_type": 1 00:16:32.461 }, 00:16:32.461 { 00:16:32.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.461 "dma_device_type": 2 00:16:32.461 } 00:16:32.461 ], 00:16:32.461 "driver_specific": {} 00:16:32.461 } 00:16:32.461 ] 00:16:32.461 10:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:32.461 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:32.461 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:32.461 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:32.461 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:32.461 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:32.461 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:32.461 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.461 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.461 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.461 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.461 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.461 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.719 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.719 "name": "Existed_Raid", 00:16:32.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.719 "strip_size_kb": 64, 00:16:32.719 "state": "configuring", 00:16:32.719 "raid_level": "raid0", 00:16:32.719 "superblock": false, 00:16:32.719 "num_base_bdevs": 4, 00:16:32.719 "num_base_bdevs_discovered": 3, 00:16:32.719 "num_base_bdevs_operational": 4, 00:16:32.719 "base_bdevs_list": [ 00:16:32.719 { 00:16:32.719 "name": "BaseBdev1", 00:16:32.719 "uuid": 
"108ec6b1-93bd-48ac-a310-cc625c932412", 00:16:32.719 "is_configured": true, 00:16:32.719 "data_offset": 0, 00:16:32.719 "data_size": 65536 00:16:32.719 }, 00:16:32.719 { 00:16:32.719 "name": null, 00:16:32.719 "uuid": "4990d788-5ba8-4ba3-bd8e-0016237c28fd", 00:16:32.719 "is_configured": false, 00:16:32.719 "data_offset": 0, 00:16:32.719 "data_size": 65536 00:16:32.719 }, 00:16:32.719 { 00:16:32.719 "name": "BaseBdev3", 00:16:32.719 "uuid": "69c2b5ac-e8aa-42ad-b5d0-085751ac8551", 00:16:32.719 "is_configured": true, 00:16:32.719 "data_offset": 0, 00:16:32.719 "data_size": 65536 00:16:32.719 }, 00:16:32.719 { 00:16:32.719 "name": "BaseBdev4", 00:16:32.719 "uuid": "42e7c064-1f6b-4d63-bf8b-3310db3948ce", 00:16:32.719 "is_configured": true, 00:16:32.719 "data_offset": 0, 00:16:32.719 "data_size": 65536 00:16:32.719 } 00:16:32.719 ] 00:16:32.719 }' 00:16:32.719 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.719 10:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.283 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.283 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:33.541 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:33.541 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:33.799 [2024-07-25 10:31:37.266240] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:33.799 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:33.799 10:31:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.799 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:33.799 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:33.799 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:33.799 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:33.799 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.799 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.799 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.799 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.799 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.799 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.057 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.057 "name": "Existed_Raid", 00:16:34.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.057 "strip_size_kb": 64, 00:16:34.057 "state": "configuring", 00:16:34.057 "raid_level": "raid0", 00:16:34.057 "superblock": false, 00:16:34.057 "num_base_bdevs": 4, 00:16:34.057 "num_base_bdevs_discovered": 2, 00:16:34.057 "num_base_bdevs_operational": 4, 00:16:34.057 "base_bdevs_list": [ 00:16:34.057 { 00:16:34.057 "name": "BaseBdev1", 00:16:34.057 "uuid": "108ec6b1-93bd-48ac-a310-cc625c932412", 00:16:34.057 "is_configured": true, 00:16:34.057 
"data_offset": 0, 00:16:34.057 "data_size": 65536 00:16:34.057 }, 00:16:34.057 { 00:16:34.057 "name": null, 00:16:34.057 "uuid": "4990d788-5ba8-4ba3-bd8e-0016237c28fd", 00:16:34.057 "is_configured": false, 00:16:34.057 "data_offset": 0, 00:16:34.057 "data_size": 65536 00:16:34.057 }, 00:16:34.057 { 00:16:34.057 "name": null, 00:16:34.057 "uuid": "69c2b5ac-e8aa-42ad-b5d0-085751ac8551", 00:16:34.057 "is_configured": false, 00:16:34.057 "data_offset": 0, 00:16:34.057 "data_size": 65536 00:16:34.057 }, 00:16:34.057 { 00:16:34.057 "name": "BaseBdev4", 00:16:34.057 "uuid": "42e7c064-1f6b-4d63-bf8b-3310db3948ce", 00:16:34.057 "is_configured": true, 00:16:34.057 "data_offset": 0, 00:16:34.057 "data_size": 65536 00:16:34.057 } 00:16:34.057 ] 00:16:34.057 }' 00:16:34.057 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.057 10:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.672 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.672 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:34.672 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:34.672 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:34.930 [2024-07-25 10:31:38.545639] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:34.930 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:34.930 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:16:34.930 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:34.930 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:34.930 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:34.930 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:34.930 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.930 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.930 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.930 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.930 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.930 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.188 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.188 "name": "Existed_Raid", 00:16:35.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.188 "strip_size_kb": 64, 00:16:35.188 "state": "configuring", 00:16:35.188 "raid_level": "raid0", 00:16:35.188 "superblock": false, 00:16:35.188 "num_base_bdevs": 4, 00:16:35.188 "num_base_bdevs_discovered": 3, 00:16:35.188 "num_base_bdevs_operational": 4, 00:16:35.188 "base_bdevs_list": [ 00:16:35.188 { 00:16:35.188 "name": "BaseBdev1", 00:16:35.188 "uuid": "108ec6b1-93bd-48ac-a310-cc625c932412", 00:16:35.188 "is_configured": true, 00:16:35.188 "data_offset": 0, 00:16:35.188 "data_size": 65536 00:16:35.188 }, 00:16:35.188 { 
00:16:35.188 "name": null, 00:16:35.188 "uuid": "4990d788-5ba8-4ba3-bd8e-0016237c28fd", 00:16:35.188 "is_configured": false, 00:16:35.188 "data_offset": 0, 00:16:35.188 "data_size": 65536 00:16:35.188 }, 00:16:35.188 { 00:16:35.188 "name": "BaseBdev3", 00:16:35.188 "uuid": "69c2b5ac-e8aa-42ad-b5d0-085751ac8551", 00:16:35.188 "is_configured": true, 00:16:35.188 "data_offset": 0, 00:16:35.188 "data_size": 65536 00:16:35.188 }, 00:16:35.188 { 00:16:35.188 "name": "BaseBdev4", 00:16:35.188 "uuid": "42e7c064-1f6b-4d63-bf8b-3310db3948ce", 00:16:35.188 "is_configured": true, 00:16:35.188 "data_offset": 0, 00:16:35.188 "data_size": 65536 00:16:35.188 } 00:16:35.188 ] 00:16:35.188 }' 00:16:35.188 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.188 10:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.753 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.753 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:36.013 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:36.013 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:36.271 [2024-07-25 10:31:39.776932] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:36.271 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:36.271 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:36.271 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:16:36.271 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:36.271 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:36.271 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:36.271 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.271 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.271 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.271 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.271 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.271 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.528 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.528 "name": "Existed_Raid", 00:16:36.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.528 "strip_size_kb": 64, 00:16:36.528 "state": "configuring", 00:16:36.528 "raid_level": "raid0", 00:16:36.528 "superblock": false, 00:16:36.528 "num_base_bdevs": 4, 00:16:36.528 "num_base_bdevs_discovered": 2, 00:16:36.528 "num_base_bdevs_operational": 4, 00:16:36.528 "base_bdevs_list": [ 00:16:36.528 { 00:16:36.528 "name": null, 00:16:36.528 "uuid": "108ec6b1-93bd-48ac-a310-cc625c932412", 00:16:36.528 "is_configured": false, 00:16:36.528 "data_offset": 0, 00:16:36.528 "data_size": 65536 00:16:36.528 }, 00:16:36.528 { 00:16:36.528 "name": null, 00:16:36.528 "uuid": "4990d788-5ba8-4ba3-bd8e-0016237c28fd", 00:16:36.528 "is_configured": false, 
00:16:36.528 "data_offset": 0, 00:16:36.528 "data_size": 65536 00:16:36.528 }, 00:16:36.528 { 00:16:36.528 "name": "BaseBdev3", 00:16:36.528 "uuid": "69c2b5ac-e8aa-42ad-b5d0-085751ac8551", 00:16:36.528 "is_configured": true, 00:16:36.528 "data_offset": 0, 00:16:36.528 "data_size": 65536 00:16:36.528 }, 00:16:36.528 { 00:16:36.528 "name": "BaseBdev4", 00:16:36.528 "uuid": "42e7c064-1f6b-4d63-bf8b-3310db3948ce", 00:16:36.528 "is_configured": true, 00:16:36.528 "data_offset": 0, 00:16:36.528 "data_size": 65536 00:16:36.528 } 00:16:36.528 ] 00:16:36.528 }' 00:16:36.528 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.528 10:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.093 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.093 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:37.351 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:37.351 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:37.351 [2024-07-25 10:31:41.049413] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.609 "name": "Existed_Raid", 00:16:37.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.609 "strip_size_kb": 64, 00:16:37.609 "state": "configuring", 00:16:37.609 "raid_level": "raid0", 00:16:37.609 "superblock": false, 00:16:37.609 "num_base_bdevs": 4, 00:16:37.609 "num_base_bdevs_discovered": 3, 00:16:37.609 "num_base_bdevs_operational": 4, 00:16:37.609 "base_bdevs_list": [ 00:16:37.609 { 00:16:37.609 "name": null, 00:16:37.609 "uuid": "108ec6b1-93bd-48ac-a310-cc625c932412", 00:16:37.609 "is_configured": false, 00:16:37.609 "data_offset": 0, 00:16:37.609 "data_size": 65536 00:16:37.609 }, 00:16:37.609 { 00:16:37.609 "name": "BaseBdev2", 00:16:37.609 "uuid": "4990d788-5ba8-4ba3-bd8e-0016237c28fd", 00:16:37.609 "is_configured": true, 00:16:37.609 "data_offset": 0, 00:16:37.609 "data_size": 65536 00:16:37.609 }, 
00:16:37.609 { 00:16:37.609 "name": "BaseBdev3", 00:16:37.609 "uuid": "69c2b5ac-e8aa-42ad-b5d0-085751ac8551", 00:16:37.609 "is_configured": true, 00:16:37.609 "data_offset": 0, 00:16:37.609 "data_size": 65536 00:16:37.609 }, 00:16:37.609 { 00:16:37.609 "name": "BaseBdev4", 00:16:37.609 "uuid": "42e7c064-1f6b-4d63-bf8b-3310db3948ce", 00:16:37.609 "is_configured": true, 00:16:37.609 "data_offset": 0, 00:16:37.609 "data_size": 65536 00:16:37.609 } 00:16:37.609 ] 00:16:37.609 }' 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.609 10:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.175 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.175 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:38.433 10:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:38.433 10:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.433 10:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:38.690 10:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 108ec6b1-93bd-48ac-a310-cc625c932412 00:16:38.948 [2024-07-25 10:31:42.606706] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:38.948 [2024-07-25 10:31:42.606750] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1822ac0 00:16:38.948 [2024-07-25 10:31:42.606758] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:38.948 [2024-07-25 10:31:42.606935] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1822f90 00:16:38.948 [2024-07-25 10:31:42.607055] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1822ac0 00:16:38.948 [2024-07-25 10:31:42.607075] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1822ac0 00:16:38.948 [2024-07-25 10:31:42.607303] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.948 NewBaseBdev 00:16:38.949 10:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:38.949 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:38.949 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:38.949 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:38.949 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:38.949 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:38.949 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:39.207 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:39.464 [ 00:16:39.464 { 00:16:39.464 "name": "NewBaseBdev", 00:16:39.464 "aliases": [ 00:16:39.464 "108ec6b1-93bd-48ac-a310-cc625c932412" 00:16:39.464 ], 00:16:39.464 "product_name": "Malloc disk", 00:16:39.464 "block_size": 512, 00:16:39.464 "num_blocks": 65536, 
00:16:39.464 "uuid": "108ec6b1-93bd-48ac-a310-cc625c932412", 00:16:39.464 "assigned_rate_limits": { 00:16:39.464 "rw_ios_per_sec": 0, 00:16:39.464 "rw_mbytes_per_sec": 0, 00:16:39.464 "r_mbytes_per_sec": 0, 00:16:39.464 "w_mbytes_per_sec": 0 00:16:39.464 }, 00:16:39.464 "claimed": true, 00:16:39.464 "claim_type": "exclusive_write", 00:16:39.464 "zoned": false, 00:16:39.464 "supported_io_types": { 00:16:39.464 "read": true, 00:16:39.464 "write": true, 00:16:39.464 "unmap": true, 00:16:39.464 "flush": true, 00:16:39.464 "reset": true, 00:16:39.464 "nvme_admin": false, 00:16:39.464 "nvme_io": false, 00:16:39.464 "nvme_io_md": false, 00:16:39.464 "write_zeroes": true, 00:16:39.464 "zcopy": true, 00:16:39.464 "get_zone_info": false, 00:16:39.464 "zone_management": false, 00:16:39.464 "zone_append": false, 00:16:39.464 "compare": false, 00:16:39.464 "compare_and_write": false, 00:16:39.464 "abort": true, 00:16:39.464 "seek_hole": false, 00:16:39.464 "seek_data": false, 00:16:39.464 "copy": true, 00:16:39.464 "nvme_iov_md": false 00:16:39.464 }, 00:16:39.464 "memory_domains": [ 00:16:39.464 { 00:16:39.465 "dma_device_id": "system", 00:16:39.465 "dma_device_type": 1 00:16:39.465 }, 00:16:39.465 { 00:16:39.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.465 "dma_device_type": 2 00:16:39.465 } 00:16:39.465 ], 00:16:39.465 "driver_specific": {} 00:16:39.465 } 00:16:39.465 ] 00:16:39.465 10:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:39.465 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:39.465 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:39.465 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:39.465 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:39.465 
10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:39.465 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:39.465 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.465 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.465 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.465 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.465 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.465 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.721 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.721 "name": "Existed_Raid", 00:16:39.721 "uuid": "6b5e1b7d-b8a5-44fd-afff-69ac1b911c1f", 00:16:39.721 "strip_size_kb": 64, 00:16:39.721 "state": "online", 00:16:39.721 "raid_level": "raid0", 00:16:39.721 "superblock": false, 00:16:39.721 "num_base_bdevs": 4, 00:16:39.721 "num_base_bdevs_discovered": 4, 00:16:39.721 "num_base_bdevs_operational": 4, 00:16:39.721 "base_bdevs_list": [ 00:16:39.721 { 00:16:39.721 "name": "NewBaseBdev", 00:16:39.721 "uuid": "108ec6b1-93bd-48ac-a310-cc625c932412", 00:16:39.721 "is_configured": true, 00:16:39.721 "data_offset": 0, 00:16:39.721 "data_size": 65536 00:16:39.721 }, 00:16:39.721 { 00:16:39.721 "name": "BaseBdev2", 00:16:39.721 "uuid": "4990d788-5ba8-4ba3-bd8e-0016237c28fd", 00:16:39.721 "is_configured": true, 00:16:39.721 "data_offset": 0, 00:16:39.721 "data_size": 65536 00:16:39.721 }, 00:16:39.721 { 00:16:39.721 "name": "BaseBdev3", 00:16:39.721 
"uuid": "69c2b5ac-e8aa-42ad-b5d0-085751ac8551", 00:16:39.721 "is_configured": true, 00:16:39.721 "data_offset": 0, 00:16:39.721 "data_size": 65536 00:16:39.721 }, 00:16:39.721 { 00:16:39.721 "name": "BaseBdev4", 00:16:39.721 "uuid": "42e7c064-1f6b-4d63-bf8b-3310db3948ce", 00:16:39.721 "is_configured": true, 00:16:39.721 "data_offset": 0, 00:16:39.721 "data_size": 65536 00:16:39.721 } 00:16:39.721 ] 00:16:39.721 }' 00:16:39.721 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.721 10:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.286 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:40.286 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:40.286 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:40.286 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:40.286 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:40.286 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:40.286 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:40.286 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:40.544 [2024-07-25 10:31:44.103066] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.544 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:40.544 "name": "Existed_Raid", 00:16:40.544 "aliases": [ 00:16:40.544 "6b5e1b7d-b8a5-44fd-afff-69ac1b911c1f" 00:16:40.544 ], 00:16:40.544 "product_name": "Raid Volume", 
00:16:40.544 "block_size": 512, 00:16:40.544 "num_blocks": 262144, 00:16:40.544 "uuid": "6b5e1b7d-b8a5-44fd-afff-69ac1b911c1f", 00:16:40.544 "assigned_rate_limits": { 00:16:40.544 "rw_ios_per_sec": 0, 00:16:40.544 "rw_mbytes_per_sec": 0, 00:16:40.544 "r_mbytes_per_sec": 0, 00:16:40.544 "w_mbytes_per_sec": 0 00:16:40.544 }, 00:16:40.544 "claimed": false, 00:16:40.544 "zoned": false, 00:16:40.544 "supported_io_types": { 00:16:40.544 "read": true, 00:16:40.544 "write": true, 00:16:40.544 "unmap": true, 00:16:40.544 "flush": true, 00:16:40.544 "reset": true, 00:16:40.544 "nvme_admin": false, 00:16:40.544 "nvme_io": false, 00:16:40.544 "nvme_io_md": false, 00:16:40.544 "write_zeroes": true, 00:16:40.544 "zcopy": false, 00:16:40.544 "get_zone_info": false, 00:16:40.544 "zone_management": false, 00:16:40.544 "zone_append": false, 00:16:40.544 "compare": false, 00:16:40.544 "compare_and_write": false, 00:16:40.544 "abort": false, 00:16:40.544 "seek_hole": false, 00:16:40.544 "seek_data": false, 00:16:40.544 "copy": false, 00:16:40.544 "nvme_iov_md": false 00:16:40.544 }, 00:16:40.544 "memory_domains": [ 00:16:40.544 { 00:16:40.544 "dma_device_id": "system", 00:16:40.544 "dma_device_type": 1 00:16:40.544 }, 00:16:40.544 { 00:16:40.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.544 "dma_device_type": 2 00:16:40.544 }, 00:16:40.544 { 00:16:40.544 "dma_device_id": "system", 00:16:40.544 "dma_device_type": 1 00:16:40.544 }, 00:16:40.544 { 00:16:40.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.544 "dma_device_type": 2 00:16:40.544 }, 00:16:40.544 { 00:16:40.544 "dma_device_id": "system", 00:16:40.544 "dma_device_type": 1 00:16:40.544 }, 00:16:40.544 { 00:16:40.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.544 "dma_device_type": 2 00:16:40.544 }, 00:16:40.544 { 00:16:40.544 "dma_device_id": "system", 00:16:40.544 "dma_device_type": 1 00:16:40.544 }, 00:16:40.544 { 00:16:40.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.544 "dma_device_type": 2 
00:16:40.544 } 00:16:40.544 ], 00:16:40.544 "driver_specific": { 00:16:40.544 "raid": { 00:16:40.544 "uuid": "6b5e1b7d-b8a5-44fd-afff-69ac1b911c1f", 00:16:40.544 "strip_size_kb": 64, 00:16:40.544 "state": "online", 00:16:40.544 "raid_level": "raid0", 00:16:40.544 "superblock": false, 00:16:40.544 "num_base_bdevs": 4, 00:16:40.544 "num_base_bdevs_discovered": 4, 00:16:40.544 "num_base_bdevs_operational": 4, 00:16:40.544 "base_bdevs_list": [ 00:16:40.544 { 00:16:40.544 "name": "NewBaseBdev", 00:16:40.544 "uuid": "108ec6b1-93bd-48ac-a310-cc625c932412", 00:16:40.544 "is_configured": true, 00:16:40.544 "data_offset": 0, 00:16:40.544 "data_size": 65536 00:16:40.544 }, 00:16:40.544 { 00:16:40.544 "name": "BaseBdev2", 00:16:40.544 "uuid": "4990d788-5ba8-4ba3-bd8e-0016237c28fd", 00:16:40.544 "is_configured": true, 00:16:40.544 "data_offset": 0, 00:16:40.544 "data_size": 65536 00:16:40.544 }, 00:16:40.544 { 00:16:40.544 "name": "BaseBdev3", 00:16:40.544 "uuid": "69c2b5ac-e8aa-42ad-b5d0-085751ac8551", 00:16:40.544 "is_configured": true, 00:16:40.544 "data_offset": 0, 00:16:40.544 "data_size": 65536 00:16:40.544 }, 00:16:40.544 { 00:16:40.544 "name": "BaseBdev4", 00:16:40.544 "uuid": "42e7c064-1f6b-4d63-bf8b-3310db3948ce", 00:16:40.544 "is_configured": true, 00:16:40.544 "data_offset": 0, 00:16:40.544 "data_size": 65536 00:16:40.544 } 00:16:40.544 ] 00:16:40.544 } 00:16:40.544 } 00:16:40.544 }' 00:16:40.544 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:40.544 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:40.544 BaseBdev2 00:16:40.544 BaseBdev3 00:16:40.544 BaseBdev4' 00:16:40.544 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:40.544 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:40.544 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:40.802 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:40.802 "name": "NewBaseBdev", 00:16:40.802 "aliases": [ 00:16:40.802 "108ec6b1-93bd-48ac-a310-cc625c932412" 00:16:40.802 ], 00:16:40.802 "product_name": "Malloc disk", 00:16:40.802 "block_size": 512, 00:16:40.802 "num_blocks": 65536, 00:16:40.802 "uuid": "108ec6b1-93bd-48ac-a310-cc625c932412", 00:16:40.802 "assigned_rate_limits": { 00:16:40.802 "rw_ios_per_sec": 0, 00:16:40.802 "rw_mbytes_per_sec": 0, 00:16:40.802 "r_mbytes_per_sec": 0, 00:16:40.802 "w_mbytes_per_sec": 0 00:16:40.802 }, 00:16:40.802 "claimed": true, 00:16:40.802 "claim_type": "exclusive_write", 00:16:40.802 "zoned": false, 00:16:40.802 "supported_io_types": { 00:16:40.803 "read": true, 00:16:40.803 "write": true, 00:16:40.803 "unmap": true, 00:16:40.803 "flush": true, 00:16:40.803 "reset": true, 00:16:40.803 "nvme_admin": false, 00:16:40.803 "nvme_io": false, 00:16:40.803 "nvme_io_md": false, 00:16:40.803 "write_zeroes": true, 00:16:40.803 "zcopy": true, 00:16:40.803 "get_zone_info": false, 00:16:40.803 "zone_management": false, 00:16:40.803 "zone_append": false, 00:16:40.803 "compare": false, 00:16:40.803 "compare_and_write": false, 00:16:40.803 "abort": true, 00:16:40.803 "seek_hole": false, 00:16:40.803 "seek_data": false, 00:16:40.803 "copy": true, 00:16:40.803 "nvme_iov_md": false 00:16:40.803 }, 00:16:40.803 "memory_domains": [ 00:16:40.803 { 00:16:40.803 "dma_device_id": "system", 00:16:40.803 "dma_device_type": 1 00:16:40.803 }, 00:16:40.803 { 00:16:40.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.803 "dma_device_type": 2 00:16:40.803 } 00:16:40.803 ], 00:16:40.803 "driver_specific": {} 00:16:40.803 }' 00:16:40.803 10:31:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:40.803 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:40.803 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:40.803 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:41.061 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:41.061 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:41.061 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:41.061 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:41.061 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:41.061 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:41.061 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:41.061 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:41.061 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:41.061 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:41.061 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:41.319 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:41.319 "name": "BaseBdev2", 00:16:41.319 "aliases": [ 00:16:41.319 "4990d788-5ba8-4ba3-bd8e-0016237c28fd" 00:16:41.319 ], 00:16:41.319 "product_name": "Malloc disk", 00:16:41.319 "block_size": 512, 00:16:41.319 "num_blocks": 65536, 00:16:41.319 "uuid": "4990d788-5ba8-4ba3-bd8e-0016237c28fd", 
00:16:41.319 "assigned_rate_limits": { 00:16:41.319 "rw_ios_per_sec": 0, 00:16:41.319 "rw_mbytes_per_sec": 0, 00:16:41.319 "r_mbytes_per_sec": 0, 00:16:41.319 "w_mbytes_per_sec": 0 00:16:41.319 }, 00:16:41.319 "claimed": true, 00:16:41.319 "claim_type": "exclusive_write", 00:16:41.319 "zoned": false, 00:16:41.319 "supported_io_types": { 00:16:41.319 "read": true, 00:16:41.319 "write": true, 00:16:41.319 "unmap": true, 00:16:41.319 "flush": true, 00:16:41.319 "reset": true, 00:16:41.319 "nvme_admin": false, 00:16:41.319 "nvme_io": false, 00:16:41.319 "nvme_io_md": false, 00:16:41.319 "write_zeroes": true, 00:16:41.319 "zcopy": true, 00:16:41.319 "get_zone_info": false, 00:16:41.319 "zone_management": false, 00:16:41.319 "zone_append": false, 00:16:41.319 "compare": false, 00:16:41.319 "compare_and_write": false, 00:16:41.319 "abort": true, 00:16:41.319 "seek_hole": false, 00:16:41.319 "seek_data": false, 00:16:41.319 "copy": true, 00:16:41.319 "nvme_iov_md": false 00:16:41.319 }, 00:16:41.319 "memory_domains": [ 00:16:41.319 { 00:16:41.319 "dma_device_id": "system", 00:16:41.319 "dma_device_type": 1 00:16:41.319 }, 00:16:41.319 { 00:16:41.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.320 "dma_device_type": 2 00:16:41.320 } 00:16:41.320 ], 00:16:41.320 "driver_specific": {} 00:16:41.320 }' 00:16:41.320 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:41.320 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:41.577 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:41.577 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:41.577 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:41.577 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:41.577 10:31:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:41.577 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:41.577 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:41.577 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:41.577 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:41.577 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:41.577 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:41.577 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:41.577 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:41.835 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:41.835 "name": "BaseBdev3", 00:16:41.835 "aliases": [ 00:16:41.835 "69c2b5ac-e8aa-42ad-b5d0-085751ac8551" 00:16:41.835 ], 00:16:41.835 "product_name": "Malloc disk", 00:16:41.835 "block_size": 512, 00:16:41.835 "num_blocks": 65536, 00:16:41.835 "uuid": "69c2b5ac-e8aa-42ad-b5d0-085751ac8551", 00:16:41.835 "assigned_rate_limits": { 00:16:41.835 "rw_ios_per_sec": 0, 00:16:41.835 "rw_mbytes_per_sec": 0, 00:16:41.835 "r_mbytes_per_sec": 0, 00:16:41.835 "w_mbytes_per_sec": 0 00:16:41.835 }, 00:16:41.835 "claimed": true, 00:16:41.835 "claim_type": "exclusive_write", 00:16:41.835 "zoned": false, 00:16:41.835 "supported_io_types": { 00:16:41.835 "read": true, 00:16:41.835 "write": true, 00:16:41.835 "unmap": true, 00:16:41.835 "flush": true, 00:16:41.835 "reset": true, 00:16:41.835 "nvme_admin": false, 00:16:41.835 "nvme_io": false, 00:16:41.835 "nvme_io_md": false, 00:16:41.835 "write_zeroes": true, 
00:16:41.835 "zcopy": true, 00:16:41.835 "get_zone_info": false, 00:16:41.835 "zone_management": false, 00:16:41.835 "zone_append": false, 00:16:41.835 "compare": false, 00:16:41.835 "compare_and_write": false, 00:16:41.835 "abort": true, 00:16:41.835 "seek_hole": false, 00:16:41.835 "seek_data": false, 00:16:41.835 "copy": true, 00:16:41.835 "nvme_iov_md": false 00:16:41.835 }, 00:16:41.835 "memory_domains": [ 00:16:41.835 { 00:16:41.835 "dma_device_id": "system", 00:16:41.835 "dma_device_type": 1 00:16:41.835 }, 00:16:41.835 { 00:16:41.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.835 "dma_device_type": 2 00:16:41.835 } 00:16:41.835 ], 00:16:41.835 "driver_specific": {} 00:16:41.835 }' 00:16:41.835 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:41.835 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:42.093 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:42.093 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:42.093 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:42.093 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:42.093 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:42.093 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:42.093 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:42.093 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:42.093 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:42.093 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:42.093 10:31:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:42.093 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:42.093 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:42.351 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:42.351 "name": "BaseBdev4", 00:16:42.351 "aliases": [ 00:16:42.351 "42e7c064-1f6b-4d63-bf8b-3310db3948ce" 00:16:42.351 ], 00:16:42.351 "product_name": "Malloc disk", 00:16:42.351 "block_size": 512, 00:16:42.351 "num_blocks": 65536, 00:16:42.351 "uuid": "42e7c064-1f6b-4d63-bf8b-3310db3948ce", 00:16:42.351 "assigned_rate_limits": { 00:16:42.351 "rw_ios_per_sec": 0, 00:16:42.351 "rw_mbytes_per_sec": 0, 00:16:42.351 "r_mbytes_per_sec": 0, 00:16:42.351 "w_mbytes_per_sec": 0 00:16:42.351 }, 00:16:42.351 "claimed": true, 00:16:42.351 "claim_type": "exclusive_write", 00:16:42.351 "zoned": false, 00:16:42.351 "supported_io_types": { 00:16:42.351 "read": true, 00:16:42.351 "write": true, 00:16:42.351 "unmap": true, 00:16:42.351 "flush": true, 00:16:42.351 "reset": true, 00:16:42.351 "nvme_admin": false, 00:16:42.351 "nvme_io": false, 00:16:42.351 "nvme_io_md": false, 00:16:42.351 "write_zeroes": true, 00:16:42.351 "zcopy": true, 00:16:42.351 "get_zone_info": false, 00:16:42.351 "zone_management": false, 00:16:42.351 "zone_append": false, 00:16:42.351 "compare": false, 00:16:42.351 "compare_and_write": false, 00:16:42.351 "abort": true, 00:16:42.351 "seek_hole": false, 00:16:42.351 "seek_data": false, 00:16:42.351 "copy": true, 00:16:42.351 "nvme_iov_md": false 00:16:42.351 }, 00:16:42.351 "memory_domains": [ 00:16:42.351 { 00:16:42.351 "dma_device_id": "system", 00:16:42.351 "dma_device_type": 1 00:16:42.351 }, 00:16:42.351 { 00:16:42.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.351 "dma_device_type": 2 
00:16:42.351 } 00:16:42.351 ], 00:16:42.351 "driver_specific": {} 00:16:42.351 }' 00:16:42.351 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:42.351 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:42.609 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:42.609 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:42.609 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:42.609 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:42.609 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:42.609 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:42.609 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:42.609 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:42.609 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:42.609 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:42.609 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:42.867 [2024-07-25 10:31:46.497164] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.867 [2024-07-25 10:31:46.497194] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.867 [2024-07-25 10:31:46.497283] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.867 [2024-07-25 10:31:46.497354] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:16:42.867 [2024-07-25 10:31:46.497367] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1822ac0 name Existed_Raid, state offline 00:16:42.867 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2388514 00:16:42.867 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2388514 ']' 00:16:42.867 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2388514 00:16:42.867 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:42.867 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.867 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2388514 00:16:42.867 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:42.867 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:42.867 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2388514' 00:16:42.867 killing process with pid 2388514 00:16:42.867 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2388514 00:16:42.867 [2024-07-25 10:31:46.543428] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:42.867 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2388514 00:16:43.125 [2024-07-25 10:31:46.592796] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:43.384 00:16:43.384 real 0m31.614s 00:16:43.384 user 0m59.039s 00:16:43.384 sys 0m4.296s 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.384 ************************************ 00:16:43.384 END TEST raid_state_function_test 00:16:43.384 ************************************ 00:16:43.384 10:31:46 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:16:43.384 10:31:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:43.384 10:31:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.384 10:31:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.384 ************************************ 00:16:43.384 START TEST raid_state_function_test_sb 00:16:43.384 ************************************ 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:43.384 10:31:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # 
strip_size_create_arg='-z 64' 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2392930 00:16:43.384 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:43.385 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2392930' 00:16:43.385 Process raid pid: 2392930 00:16:43.385 10:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2392930 /var/tmp/spdk-raid.sock 00:16:43.385 10:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2392930 ']' 00:16:43.385 10:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:43.385 10:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:43.385 10:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:43.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:43.385 10:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:43.385 10:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.385 [2024-07-25 10:31:46.945758] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:16:43.385 [2024-07-25 10:31:46.945827] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.385 [2024-07-25 10:31:47.022800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.643 [2024-07-25 10:31:47.136144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.643 [2024-07-25 10:31:47.206928] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.643 [2024-07-25 10:31:47.206972] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.576 10:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:44.576 10:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:44.576 10:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:44.576 [2024-07-25 10:31:48.188063] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:44.576 [2024-07-25 10:31:48.188118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:44.576 [2024-07-25 10:31:48.188133] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:44.576 [2024-07-25 10:31:48.188146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:44.576 [2024-07-25 10:31:48.188156] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:44.576 [2024-07-25 10:31:48.188168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:16:44.576 [2024-07-25 10:31:48.188177] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:44.576 [2024-07-25 10:31:48.188190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:44.576 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:44.576 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:44.576 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:44.576 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:44.576 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:44.576 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:44.576 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.576 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.576 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.576 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.576 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.576 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.834 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.834 "name": "Existed_Raid", 00:16:44.834 "uuid": "f0ebdb41-55aa-4494-8b2e-41aaa0f78944", 
00:16:44.834 "strip_size_kb": 64, 00:16:44.834 "state": "configuring", 00:16:44.834 "raid_level": "raid0", 00:16:44.834 "superblock": true, 00:16:44.834 "num_base_bdevs": 4, 00:16:44.834 "num_base_bdevs_discovered": 0, 00:16:44.834 "num_base_bdevs_operational": 4, 00:16:44.834 "base_bdevs_list": [ 00:16:44.834 { 00:16:44.834 "name": "BaseBdev1", 00:16:44.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.834 "is_configured": false, 00:16:44.834 "data_offset": 0, 00:16:44.834 "data_size": 0 00:16:44.834 }, 00:16:44.834 { 00:16:44.834 "name": "BaseBdev2", 00:16:44.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.834 "is_configured": false, 00:16:44.834 "data_offset": 0, 00:16:44.834 "data_size": 0 00:16:44.834 }, 00:16:44.834 { 00:16:44.834 "name": "BaseBdev3", 00:16:44.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.834 "is_configured": false, 00:16:44.834 "data_offset": 0, 00:16:44.834 "data_size": 0 00:16:44.834 }, 00:16:44.834 { 00:16:44.834 "name": "BaseBdev4", 00:16:44.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.834 "is_configured": false, 00:16:44.834 "data_offset": 0, 00:16:44.834 "data_size": 0 00:16:44.834 } 00:16:44.834 ] 00:16:44.834 }' 00:16:44.834 10:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.834 10:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.400 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:45.658 [2024-07-25 10:31:49.226687] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:45.658 [2024-07-25 10:31:49.226722] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10de640 name Existed_Raid, state configuring 00:16:45.658 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:45.916 [2024-07-25 10:31:49.471358] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:45.916 [2024-07-25 10:31:49.471395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:45.916 [2024-07-25 10:31:49.471406] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:45.916 [2024-07-25 10:31:49.471419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.916 [2024-07-25 10:31:49.471428] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:45.916 [2024-07-25 10:31:49.471441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:45.916 [2024-07-25 10:31:49.471450] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:45.916 [2024-07-25 10:31:49.471462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:45.916 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:46.174 [2024-07-25 10:31:49.737305] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.174 BaseBdev1 00:16:46.174 10:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:46.174 10:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:46.174 10:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:46.174 10:31:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:46.174 10:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:46.174 10:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:46.174 10:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:46.432 10:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:46.689 [ 00:16:46.689 { 00:16:46.689 "name": "BaseBdev1", 00:16:46.689 "aliases": [ 00:16:46.689 "fa682f8f-05fc-4b73-ac37-79dc456fb587" 00:16:46.689 ], 00:16:46.689 "product_name": "Malloc disk", 00:16:46.689 "block_size": 512, 00:16:46.690 "num_blocks": 65536, 00:16:46.690 "uuid": "fa682f8f-05fc-4b73-ac37-79dc456fb587", 00:16:46.690 "assigned_rate_limits": { 00:16:46.690 "rw_ios_per_sec": 0, 00:16:46.690 "rw_mbytes_per_sec": 0, 00:16:46.690 "r_mbytes_per_sec": 0, 00:16:46.690 "w_mbytes_per_sec": 0 00:16:46.690 }, 00:16:46.690 "claimed": true, 00:16:46.690 "claim_type": "exclusive_write", 00:16:46.690 "zoned": false, 00:16:46.690 "supported_io_types": { 00:16:46.690 "read": true, 00:16:46.690 "write": true, 00:16:46.690 "unmap": true, 00:16:46.690 "flush": true, 00:16:46.690 "reset": true, 00:16:46.690 "nvme_admin": false, 00:16:46.690 "nvme_io": false, 00:16:46.690 "nvme_io_md": false, 00:16:46.690 "write_zeroes": true, 00:16:46.690 "zcopy": true, 00:16:46.690 "get_zone_info": false, 00:16:46.690 "zone_management": false, 00:16:46.690 "zone_append": false, 00:16:46.690 "compare": false, 00:16:46.690 "compare_and_write": false, 00:16:46.690 "abort": true, 00:16:46.690 "seek_hole": false, 00:16:46.690 "seek_data": false, 
00:16:46.690 "copy": true, 00:16:46.690 "nvme_iov_md": false 00:16:46.690 }, 00:16:46.690 "memory_domains": [ 00:16:46.690 { 00:16:46.690 "dma_device_id": "system", 00:16:46.690 "dma_device_type": 1 00:16:46.690 }, 00:16:46.690 { 00:16:46.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.690 "dma_device_type": 2 00:16:46.690 } 00:16:46.690 ], 00:16:46.690 "driver_specific": {} 00:16:46.690 } 00:16:46.690 ] 00:16:46.690 10:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:46.690 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:46.690 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:46.690 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:46.690 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:46.690 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:46.690 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:46.690 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.690 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.690 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.690 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.690 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.690 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.948 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:46.948 "name": "Existed_Raid", 00:16:46.948 "uuid": "7c96e578-f6bc-41a1-a6b8-d2f668ec40a9", 00:16:46.948 "strip_size_kb": 64, 00:16:46.948 "state": "configuring", 00:16:46.948 "raid_level": "raid0", 00:16:46.948 "superblock": true, 00:16:46.948 "num_base_bdevs": 4, 00:16:46.948 "num_base_bdevs_discovered": 1, 00:16:46.948 "num_base_bdevs_operational": 4, 00:16:46.948 "base_bdevs_list": [ 00:16:46.948 { 00:16:46.948 "name": "BaseBdev1", 00:16:46.948 "uuid": "fa682f8f-05fc-4b73-ac37-79dc456fb587", 00:16:46.948 "is_configured": true, 00:16:46.948 "data_offset": 2048, 00:16:46.948 "data_size": 63488 00:16:46.948 }, 00:16:46.948 { 00:16:46.948 "name": "BaseBdev2", 00:16:46.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.948 "is_configured": false, 00:16:46.948 "data_offset": 0, 00:16:46.948 "data_size": 0 00:16:46.948 }, 00:16:46.948 { 00:16:46.948 "name": "BaseBdev3", 00:16:46.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.948 "is_configured": false, 00:16:46.948 "data_offset": 0, 00:16:46.948 "data_size": 0 00:16:46.948 }, 00:16:46.948 { 00:16:46.948 "name": "BaseBdev4", 00:16:46.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.948 "is_configured": false, 00:16:46.948 "data_offset": 0, 00:16:46.948 "data_size": 0 00:16:46.948 } 00:16:46.948 ] 00:16:46.948 }' 00:16:46.948 10:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:46.948 10:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.513 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:47.771 [2024-07-25 10:31:51.325517] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:16:47.771 [2024-07-25 10:31:51.325573] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10dde50 name Existed_Raid, state configuring 00:16:47.771 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:48.029 [2024-07-25 10:31:51.574234] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.029 [2024-07-25 10:31:51.575756] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:48.029 [2024-07-25 10:31:51.575792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:48.029 [2024-07-25 10:31:51.575805] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:48.029 [2024-07-25 10:31:51.575818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:48.029 [2024-07-25 10:31:51.575827] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:48.029 [2024-07-25 10:31:51.575840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:48.029 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:48.029 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:48.029 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:48.029 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:48.029 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:48.029 10:31:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:48.029 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:48.029 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:48.029 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.029 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.029 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.029 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.029 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.029 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.287 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.287 "name": "Existed_Raid", 00:16:48.287 "uuid": "93cabef1-d6d0-4470-9c0d-a3a15549bcf3", 00:16:48.287 "strip_size_kb": 64, 00:16:48.287 "state": "configuring", 00:16:48.287 "raid_level": "raid0", 00:16:48.287 "superblock": true, 00:16:48.287 "num_base_bdevs": 4, 00:16:48.287 "num_base_bdevs_discovered": 1, 00:16:48.287 "num_base_bdevs_operational": 4, 00:16:48.287 "base_bdevs_list": [ 00:16:48.287 { 00:16:48.287 "name": "BaseBdev1", 00:16:48.287 "uuid": "fa682f8f-05fc-4b73-ac37-79dc456fb587", 00:16:48.287 "is_configured": true, 00:16:48.287 "data_offset": 2048, 00:16:48.287 "data_size": 63488 00:16:48.287 }, 00:16:48.287 { 00:16:48.287 "name": "BaseBdev2", 00:16:48.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.287 "is_configured": false, 
00:16:48.287 "data_offset": 0, 00:16:48.287 "data_size": 0 00:16:48.287 }, 00:16:48.287 { 00:16:48.287 "name": "BaseBdev3", 00:16:48.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.287 "is_configured": false, 00:16:48.287 "data_offset": 0, 00:16:48.288 "data_size": 0 00:16:48.288 }, 00:16:48.288 { 00:16:48.288 "name": "BaseBdev4", 00:16:48.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.288 "is_configured": false, 00:16:48.288 "data_offset": 0, 00:16:48.288 "data_size": 0 00:16:48.288 } 00:16:48.288 ] 00:16:48.288 }' 00:16:48.288 10:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.288 10:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.854 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:49.112 [2024-07-25 10:31:52.622809] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.112 BaseBdev2 00:16:49.112 10:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:49.112 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:49.112 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:49.112 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:49.112 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:49.112 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:49.112 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:16:49.370 10:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:49.628 [ 00:16:49.628 { 00:16:49.628 "name": "BaseBdev2", 00:16:49.628 "aliases": [ 00:16:49.628 "ba5abf36-edcb-47b8-ad31-cff7e8aeb0d9" 00:16:49.628 ], 00:16:49.628 "product_name": "Malloc disk", 00:16:49.628 "block_size": 512, 00:16:49.628 "num_blocks": 65536, 00:16:49.628 "uuid": "ba5abf36-edcb-47b8-ad31-cff7e8aeb0d9", 00:16:49.628 "assigned_rate_limits": { 00:16:49.628 "rw_ios_per_sec": 0, 00:16:49.628 "rw_mbytes_per_sec": 0, 00:16:49.628 "r_mbytes_per_sec": 0, 00:16:49.628 "w_mbytes_per_sec": 0 00:16:49.628 }, 00:16:49.628 "claimed": true, 00:16:49.628 "claim_type": "exclusive_write", 00:16:49.628 "zoned": false, 00:16:49.628 "supported_io_types": { 00:16:49.628 "read": true, 00:16:49.628 "write": true, 00:16:49.628 "unmap": true, 00:16:49.628 "flush": true, 00:16:49.628 "reset": true, 00:16:49.628 "nvme_admin": false, 00:16:49.628 "nvme_io": false, 00:16:49.628 "nvme_io_md": false, 00:16:49.628 "write_zeroes": true, 00:16:49.628 "zcopy": true, 00:16:49.628 "get_zone_info": false, 00:16:49.628 "zone_management": false, 00:16:49.628 "zone_append": false, 00:16:49.628 "compare": false, 00:16:49.628 "compare_and_write": false, 00:16:49.628 "abort": true, 00:16:49.629 "seek_hole": false, 00:16:49.629 "seek_data": false, 00:16:49.629 "copy": true, 00:16:49.629 "nvme_iov_md": false 00:16:49.629 }, 00:16:49.629 "memory_domains": [ 00:16:49.629 { 00:16:49.629 "dma_device_id": "system", 00:16:49.629 "dma_device_type": 1 00:16:49.629 }, 00:16:49.629 { 00:16:49.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.629 "dma_device_type": 2 00:16:49.629 } 00:16:49.629 ], 00:16:49.629 "driver_specific": {} 00:16:49.629 } 00:16:49.629 ] 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 
-- # return 0 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.629 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.890 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.890 "name": "Existed_Raid", 00:16:49.890 "uuid": "93cabef1-d6d0-4470-9c0d-a3a15549bcf3", 00:16:49.890 "strip_size_kb": 64, 
00:16:49.890 "state": "configuring", 00:16:49.890 "raid_level": "raid0", 00:16:49.890 "superblock": true, 00:16:49.890 "num_base_bdevs": 4, 00:16:49.890 "num_base_bdevs_discovered": 2, 00:16:49.890 "num_base_bdevs_operational": 4, 00:16:49.890 "base_bdevs_list": [ 00:16:49.890 { 00:16:49.890 "name": "BaseBdev1", 00:16:49.890 "uuid": "fa682f8f-05fc-4b73-ac37-79dc456fb587", 00:16:49.890 "is_configured": true, 00:16:49.890 "data_offset": 2048, 00:16:49.890 "data_size": 63488 00:16:49.890 }, 00:16:49.890 { 00:16:49.890 "name": "BaseBdev2", 00:16:49.890 "uuid": "ba5abf36-edcb-47b8-ad31-cff7e8aeb0d9", 00:16:49.890 "is_configured": true, 00:16:49.890 "data_offset": 2048, 00:16:49.890 "data_size": 63488 00:16:49.890 }, 00:16:49.890 { 00:16:49.890 "name": "BaseBdev3", 00:16:49.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.890 "is_configured": false, 00:16:49.890 "data_offset": 0, 00:16:49.890 "data_size": 0 00:16:49.890 }, 00:16:49.890 { 00:16:49.890 "name": "BaseBdev4", 00:16:49.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.891 "is_configured": false, 00:16:49.891 "data_offset": 0, 00:16:49.891 "data_size": 0 00:16:49.891 } 00:16:49.891 ] 00:16:49.891 }' 00:16:49.891 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.891 10:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.455 10:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:50.713 [2024-07-25 10:31:54.185069] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.713 BaseBdev3 00:16:50.713 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:50.713 10:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:16:50.713 10:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:50.713 10:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:50.713 10:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:50.713 10:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:50.713 10:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:50.996 10:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:51.270 [ 00:16:51.270 { 00:16:51.270 "name": "BaseBdev3", 00:16:51.270 "aliases": [ 00:16:51.270 "040c1859-54f0-4e9c-bd1f-6c6b1952feb9" 00:16:51.270 ], 00:16:51.270 "product_name": "Malloc disk", 00:16:51.270 "block_size": 512, 00:16:51.270 "num_blocks": 65536, 00:16:51.270 "uuid": "040c1859-54f0-4e9c-bd1f-6c6b1952feb9", 00:16:51.270 "assigned_rate_limits": { 00:16:51.270 "rw_ios_per_sec": 0, 00:16:51.270 "rw_mbytes_per_sec": 0, 00:16:51.270 "r_mbytes_per_sec": 0, 00:16:51.270 "w_mbytes_per_sec": 0 00:16:51.270 }, 00:16:51.270 "claimed": true, 00:16:51.270 "claim_type": "exclusive_write", 00:16:51.270 "zoned": false, 00:16:51.270 "supported_io_types": { 00:16:51.270 "read": true, 00:16:51.270 "write": true, 00:16:51.270 "unmap": true, 00:16:51.270 "flush": true, 00:16:51.270 "reset": true, 00:16:51.270 "nvme_admin": false, 00:16:51.270 "nvme_io": false, 00:16:51.270 "nvme_io_md": false, 00:16:51.270 "write_zeroes": true, 00:16:51.270 "zcopy": true, 00:16:51.270 "get_zone_info": false, 00:16:51.270 "zone_management": false, 00:16:51.270 "zone_append": false, 00:16:51.270 
"compare": false, 00:16:51.270 "compare_and_write": false, 00:16:51.270 "abort": true, 00:16:51.270 "seek_hole": false, 00:16:51.270 "seek_data": false, 00:16:51.270 "copy": true, 00:16:51.270 "nvme_iov_md": false 00:16:51.270 }, 00:16:51.270 "memory_domains": [ 00:16:51.270 { 00:16:51.270 "dma_device_id": "system", 00:16:51.270 "dma_device_type": 1 00:16:51.270 }, 00:16:51.270 { 00:16:51.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.270 "dma_device_type": 2 00:16:51.270 } 00:16:51.270 ], 00:16:51.270 "driver_specific": {} 00:16:51.270 } 00:16:51.270 ] 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.270 10:31:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.270 "name": "Existed_Raid", 00:16:51.270 "uuid": "93cabef1-d6d0-4470-9c0d-a3a15549bcf3", 00:16:51.270 "strip_size_kb": 64, 00:16:51.270 "state": "configuring", 00:16:51.270 "raid_level": "raid0", 00:16:51.270 "superblock": true, 00:16:51.270 "num_base_bdevs": 4, 00:16:51.270 "num_base_bdevs_discovered": 3, 00:16:51.270 "num_base_bdevs_operational": 4, 00:16:51.270 "base_bdevs_list": [ 00:16:51.270 { 00:16:51.270 "name": "BaseBdev1", 00:16:51.270 "uuid": "fa682f8f-05fc-4b73-ac37-79dc456fb587", 00:16:51.270 "is_configured": true, 00:16:51.270 "data_offset": 2048, 00:16:51.270 "data_size": 63488 00:16:51.270 }, 00:16:51.270 { 00:16:51.270 "name": "BaseBdev2", 00:16:51.270 "uuid": "ba5abf36-edcb-47b8-ad31-cff7e8aeb0d9", 00:16:51.270 "is_configured": true, 00:16:51.270 "data_offset": 2048, 00:16:51.270 "data_size": 63488 00:16:51.270 }, 00:16:51.270 { 00:16:51.270 "name": "BaseBdev3", 00:16:51.270 "uuid": "040c1859-54f0-4e9c-bd1f-6c6b1952feb9", 00:16:51.270 "is_configured": true, 00:16:51.270 "data_offset": 2048, 00:16:51.270 "data_size": 63488 00:16:51.270 }, 00:16:51.270 { 00:16:51.270 "name": "BaseBdev4", 00:16:51.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.270 "is_configured": false, 00:16:51.270 "data_offset": 0, 00:16:51.270 "data_size": 0 00:16:51.270 } 00:16:51.270 ] 00:16:51.270 }' 00:16:51.270 10:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.270 10:31:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.835 10:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:52.094 [2024-07-25 10:31:55.755546] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:52.094 [2024-07-25 10:31:55.755787] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x10decb0 00:16:52.094 [2024-07-25 10:31:55.755805] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:52.094 [2024-07-25 10:31:55.755982] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10f5c70 00:16:52.094 [2024-07-25 10:31:55.756159] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10decb0 00:16:52.094 [2024-07-25 10:31:55.756176] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10decb0 00:16:52.094 [2024-07-25 10:31:55.756282] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.094 BaseBdev4 00:16:52.094 10:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:52.094 10:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:52.094 10:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:52.094 10:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:52.094 10:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:52.094 10:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:52.094 10:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:52.352 10:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:52.610 [ 00:16:52.610 { 00:16:52.610 "name": "BaseBdev4", 00:16:52.610 "aliases": [ 00:16:52.610 "50d5f3d4-f610-43c9-a2ea-83a6367a11ee" 00:16:52.610 ], 00:16:52.610 "product_name": "Malloc disk", 00:16:52.610 "block_size": 512, 00:16:52.610 "num_blocks": 65536, 00:16:52.610 "uuid": "50d5f3d4-f610-43c9-a2ea-83a6367a11ee", 00:16:52.610 "assigned_rate_limits": { 00:16:52.610 "rw_ios_per_sec": 0, 00:16:52.610 "rw_mbytes_per_sec": 0, 00:16:52.610 "r_mbytes_per_sec": 0, 00:16:52.610 "w_mbytes_per_sec": 0 00:16:52.610 }, 00:16:52.610 "claimed": true, 00:16:52.610 "claim_type": "exclusive_write", 00:16:52.610 "zoned": false, 00:16:52.610 "supported_io_types": { 00:16:52.610 "read": true, 00:16:52.610 "write": true, 00:16:52.610 "unmap": true, 00:16:52.610 "flush": true, 00:16:52.610 "reset": true, 00:16:52.610 "nvme_admin": false, 00:16:52.610 "nvme_io": false, 00:16:52.610 "nvme_io_md": false, 00:16:52.610 "write_zeroes": true, 00:16:52.610 "zcopy": true, 00:16:52.610 "get_zone_info": false, 00:16:52.610 "zone_management": false, 00:16:52.610 "zone_append": false, 00:16:52.610 "compare": false, 00:16:52.610 "compare_and_write": false, 00:16:52.610 "abort": true, 00:16:52.610 "seek_hole": false, 00:16:52.610 "seek_data": false, 00:16:52.610 "copy": true, 00:16:52.610 "nvme_iov_md": false 00:16:52.610 }, 00:16:52.610 "memory_domains": [ 00:16:52.610 { 00:16:52.610 "dma_device_id": "system", 00:16:52.610 "dma_device_type": 1 00:16:52.610 }, 00:16:52.610 { 00:16:52.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.610 "dma_device_type": 2 00:16:52.610 } 00:16:52.610 ], 00:16:52.610 "driver_specific": {} 00:16:52.610 } 00:16:52.610 ] 
00:16:52.610 10:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:52.610 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:52.611 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:52.611 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:52.611 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:52.611 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:52.611 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:52.611 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:52.611 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:52.611 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.611 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.611 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.611 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.611 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.611 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.869 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.869 "name": "Existed_Raid", 00:16:52.869 
"uuid": "93cabef1-d6d0-4470-9c0d-a3a15549bcf3", 00:16:52.869 "strip_size_kb": 64, 00:16:52.869 "state": "online", 00:16:52.869 "raid_level": "raid0", 00:16:52.869 "superblock": true, 00:16:52.869 "num_base_bdevs": 4, 00:16:52.869 "num_base_bdevs_discovered": 4, 00:16:52.869 "num_base_bdevs_operational": 4, 00:16:52.869 "base_bdevs_list": [ 00:16:52.869 { 00:16:52.869 "name": "BaseBdev1", 00:16:52.869 "uuid": "fa682f8f-05fc-4b73-ac37-79dc456fb587", 00:16:52.869 "is_configured": true, 00:16:52.869 "data_offset": 2048, 00:16:52.869 "data_size": 63488 00:16:52.869 }, 00:16:52.869 { 00:16:52.869 "name": "BaseBdev2", 00:16:52.869 "uuid": "ba5abf36-edcb-47b8-ad31-cff7e8aeb0d9", 00:16:52.869 "is_configured": true, 00:16:52.869 "data_offset": 2048, 00:16:52.869 "data_size": 63488 00:16:52.869 }, 00:16:52.869 { 00:16:52.869 "name": "BaseBdev3", 00:16:52.869 "uuid": "040c1859-54f0-4e9c-bd1f-6c6b1952feb9", 00:16:52.869 "is_configured": true, 00:16:52.869 "data_offset": 2048, 00:16:52.869 "data_size": 63488 00:16:52.869 }, 00:16:52.869 { 00:16:52.869 "name": "BaseBdev4", 00:16:52.869 "uuid": "50d5f3d4-f610-43c9-a2ea-83a6367a11ee", 00:16:52.869 "is_configured": true, 00:16:52.869 "data_offset": 2048, 00:16:52.869 "data_size": 63488 00:16:52.869 } 00:16:52.869 ] 00:16:52.869 }' 00:16:52.869 10:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.869 10:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.434 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:53.434 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:53.434 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:53.434 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:53.434 10:31:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:53.434 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:53.434 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:53.434 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:53.692 [2024-07-25 10:31:57.271865] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.692 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:53.692 "name": "Existed_Raid", 00:16:53.692 "aliases": [ 00:16:53.692 "93cabef1-d6d0-4470-9c0d-a3a15549bcf3" 00:16:53.692 ], 00:16:53.692 "product_name": "Raid Volume", 00:16:53.692 "block_size": 512, 00:16:53.692 "num_blocks": 253952, 00:16:53.692 "uuid": "93cabef1-d6d0-4470-9c0d-a3a15549bcf3", 00:16:53.692 "assigned_rate_limits": { 00:16:53.692 "rw_ios_per_sec": 0, 00:16:53.692 "rw_mbytes_per_sec": 0, 00:16:53.692 "r_mbytes_per_sec": 0, 00:16:53.692 "w_mbytes_per_sec": 0 00:16:53.692 }, 00:16:53.692 "claimed": false, 00:16:53.692 "zoned": false, 00:16:53.692 "supported_io_types": { 00:16:53.692 "read": true, 00:16:53.692 "write": true, 00:16:53.692 "unmap": true, 00:16:53.692 "flush": true, 00:16:53.692 "reset": true, 00:16:53.692 "nvme_admin": false, 00:16:53.692 "nvme_io": false, 00:16:53.692 "nvme_io_md": false, 00:16:53.692 "write_zeroes": true, 00:16:53.692 "zcopy": false, 00:16:53.692 "get_zone_info": false, 00:16:53.692 "zone_management": false, 00:16:53.692 "zone_append": false, 00:16:53.692 "compare": false, 00:16:53.692 "compare_and_write": false, 00:16:53.692 "abort": false, 00:16:53.692 "seek_hole": false, 00:16:53.692 "seek_data": false, 00:16:53.692 "copy": false, 00:16:53.692 "nvme_iov_md": false 00:16:53.692 }, 00:16:53.692 
"memory_domains": [ 00:16:53.692 { 00:16:53.692 "dma_device_id": "system", 00:16:53.692 "dma_device_type": 1 00:16:53.692 }, 00:16:53.692 { 00:16:53.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.692 "dma_device_type": 2 00:16:53.692 }, 00:16:53.692 { 00:16:53.692 "dma_device_id": "system", 00:16:53.692 "dma_device_type": 1 00:16:53.692 }, 00:16:53.692 { 00:16:53.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.692 "dma_device_type": 2 00:16:53.692 }, 00:16:53.692 { 00:16:53.692 "dma_device_id": "system", 00:16:53.692 "dma_device_type": 1 00:16:53.692 }, 00:16:53.692 { 00:16:53.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.692 "dma_device_type": 2 00:16:53.692 }, 00:16:53.692 { 00:16:53.692 "dma_device_id": "system", 00:16:53.692 "dma_device_type": 1 00:16:53.692 }, 00:16:53.692 { 00:16:53.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.692 "dma_device_type": 2 00:16:53.692 } 00:16:53.692 ], 00:16:53.692 "driver_specific": { 00:16:53.692 "raid": { 00:16:53.693 "uuid": "93cabef1-d6d0-4470-9c0d-a3a15549bcf3", 00:16:53.693 "strip_size_kb": 64, 00:16:53.693 "state": "online", 00:16:53.693 "raid_level": "raid0", 00:16:53.693 "superblock": true, 00:16:53.693 "num_base_bdevs": 4, 00:16:53.693 "num_base_bdevs_discovered": 4, 00:16:53.693 "num_base_bdevs_operational": 4, 00:16:53.693 "base_bdevs_list": [ 00:16:53.693 { 00:16:53.693 "name": "BaseBdev1", 00:16:53.693 "uuid": "fa682f8f-05fc-4b73-ac37-79dc456fb587", 00:16:53.693 "is_configured": true, 00:16:53.693 "data_offset": 2048, 00:16:53.693 "data_size": 63488 00:16:53.693 }, 00:16:53.693 { 00:16:53.693 "name": "BaseBdev2", 00:16:53.693 "uuid": "ba5abf36-edcb-47b8-ad31-cff7e8aeb0d9", 00:16:53.693 "is_configured": true, 00:16:53.693 "data_offset": 2048, 00:16:53.693 "data_size": 63488 00:16:53.693 }, 00:16:53.693 { 00:16:53.693 "name": "BaseBdev3", 00:16:53.693 "uuid": "040c1859-54f0-4e9c-bd1f-6c6b1952feb9", 00:16:53.693 "is_configured": true, 00:16:53.693 "data_offset": 2048, 00:16:53.693 
"data_size": 63488 00:16:53.693 }, 00:16:53.693 { 00:16:53.693 "name": "BaseBdev4", 00:16:53.693 "uuid": "50d5f3d4-f610-43c9-a2ea-83a6367a11ee", 00:16:53.693 "is_configured": true, 00:16:53.693 "data_offset": 2048, 00:16:53.693 "data_size": 63488 00:16:53.693 } 00:16:53.693 ] 00:16:53.693 } 00:16:53.693 } 00:16:53.693 }' 00:16:53.693 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:53.693 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:53.693 BaseBdev2 00:16:53.693 BaseBdev3 00:16:53.693 BaseBdev4' 00:16:53.693 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:53.693 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:53.693 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:53.951 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:53.951 "name": "BaseBdev1", 00:16:53.951 "aliases": [ 00:16:53.951 "fa682f8f-05fc-4b73-ac37-79dc456fb587" 00:16:53.951 ], 00:16:53.951 "product_name": "Malloc disk", 00:16:53.951 "block_size": 512, 00:16:53.951 "num_blocks": 65536, 00:16:53.951 "uuid": "fa682f8f-05fc-4b73-ac37-79dc456fb587", 00:16:53.951 "assigned_rate_limits": { 00:16:53.951 "rw_ios_per_sec": 0, 00:16:53.951 "rw_mbytes_per_sec": 0, 00:16:53.951 "r_mbytes_per_sec": 0, 00:16:53.951 "w_mbytes_per_sec": 0 00:16:53.951 }, 00:16:53.951 "claimed": true, 00:16:53.951 "claim_type": "exclusive_write", 00:16:53.951 "zoned": false, 00:16:53.951 "supported_io_types": { 00:16:53.951 "read": true, 00:16:53.951 "write": true, 00:16:53.951 "unmap": true, 00:16:53.951 "flush": true, 00:16:53.951 "reset": true, 
00:16:53.951 "nvme_admin": false, 00:16:53.951 "nvme_io": false, 00:16:53.951 "nvme_io_md": false, 00:16:53.951 "write_zeroes": true, 00:16:53.951 "zcopy": true, 00:16:53.951 "get_zone_info": false, 00:16:53.951 "zone_management": false, 00:16:53.951 "zone_append": false, 00:16:53.951 "compare": false, 00:16:53.951 "compare_and_write": false, 00:16:53.951 "abort": true, 00:16:53.951 "seek_hole": false, 00:16:53.951 "seek_data": false, 00:16:53.951 "copy": true, 00:16:53.951 "nvme_iov_md": false 00:16:53.951 }, 00:16:53.951 "memory_domains": [ 00:16:53.951 { 00:16:53.951 "dma_device_id": "system", 00:16:53.951 "dma_device_type": 1 00:16:53.951 }, 00:16:53.951 { 00:16:53.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.951 "dma_device_type": 2 00:16:53.951 } 00:16:53.951 ], 00:16:53.951 "driver_specific": {} 00:16:53.951 }' 00:16:53.951 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.209 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.209 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:54.209 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.209 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.209 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:54.209 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.209 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.209 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:54.209 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.209 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:16:54.466 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:54.466 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:54.466 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:54.466 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:54.724 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:54.724 "name": "BaseBdev2", 00:16:54.724 "aliases": [ 00:16:54.724 "ba5abf36-edcb-47b8-ad31-cff7e8aeb0d9" 00:16:54.724 ], 00:16:54.724 "product_name": "Malloc disk", 00:16:54.724 "block_size": 512, 00:16:54.724 "num_blocks": 65536, 00:16:54.724 "uuid": "ba5abf36-edcb-47b8-ad31-cff7e8aeb0d9", 00:16:54.724 "assigned_rate_limits": { 00:16:54.724 "rw_ios_per_sec": 0, 00:16:54.724 "rw_mbytes_per_sec": 0, 00:16:54.724 "r_mbytes_per_sec": 0, 00:16:54.724 "w_mbytes_per_sec": 0 00:16:54.724 }, 00:16:54.724 "claimed": true, 00:16:54.724 "claim_type": "exclusive_write", 00:16:54.724 "zoned": false, 00:16:54.724 "supported_io_types": { 00:16:54.724 "read": true, 00:16:54.724 "write": true, 00:16:54.724 "unmap": true, 00:16:54.724 "flush": true, 00:16:54.724 "reset": true, 00:16:54.724 "nvme_admin": false, 00:16:54.724 "nvme_io": false, 00:16:54.724 "nvme_io_md": false, 00:16:54.724 "write_zeroes": true, 00:16:54.724 "zcopy": true, 00:16:54.724 "get_zone_info": false, 00:16:54.724 "zone_management": false, 00:16:54.724 "zone_append": false, 00:16:54.724 "compare": false, 00:16:54.724 "compare_and_write": false, 00:16:54.724 "abort": true, 00:16:54.724 "seek_hole": false, 00:16:54.724 "seek_data": false, 00:16:54.724 "copy": true, 00:16:54.724 "nvme_iov_md": false 00:16:54.724 }, 00:16:54.724 "memory_domains": [ 00:16:54.724 { 
00:16:54.724 "dma_device_id": "system", 00:16:54.724 "dma_device_type": 1 00:16:54.724 }, 00:16:54.724 { 00:16:54.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.724 "dma_device_type": 2 00:16:54.724 } 00:16:54.724 ], 00:16:54.724 "driver_specific": {} 00:16:54.724 }' 00:16:54.724 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.724 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.724 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:54.724 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.724 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.724 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:54.724 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.724 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.724 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:54.724 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.982 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.982 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:54.982 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:54.982 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:54.982 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:55.240 10:31:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:55.240 "name": "BaseBdev3", 00:16:55.240 "aliases": [ 00:16:55.240 "040c1859-54f0-4e9c-bd1f-6c6b1952feb9" 00:16:55.240 ], 00:16:55.240 "product_name": "Malloc disk", 00:16:55.240 "block_size": 512, 00:16:55.240 "num_blocks": 65536, 00:16:55.240 "uuid": "040c1859-54f0-4e9c-bd1f-6c6b1952feb9", 00:16:55.240 "assigned_rate_limits": { 00:16:55.240 "rw_ios_per_sec": 0, 00:16:55.240 "rw_mbytes_per_sec": 0, 00:16:55.240 "r_mbytes_per_sec": 0, 00:16:55.240 "w_mbytes_per_sec": 0 00:16:55.240 }, 00:16:55.240 "claimed": true, 00:16:55.240 "claim_type": "exclusive_write", 00:16:55.240 "zoned": false, 00:16:55.240 "supported_io_types": { 00:16:55.240 "read": true, 00:16:55.240 "write": true, 00:16:55.240 "unmap": true, 00:16:55.240 "flush": true, 00:16:55.240 "reset": true, 00:16:55.240 "nvme_admin": false, 00:16:55.240 "nvme_io": false, 00:16:55.240 "nvme_io_md": false, 00:16:55.240 "write_zeroes": true, 00:16:55.240 "zcopy": true, 00:16:55.240 "get_zone_info": false, 00:16:55.240 "zone_management": false, 00:16:55.240 "zone_append": false, 00:16:55.240 "compare": false, 00:16:55.240 "compare_and_write": false, 00:16:55.240 "abort": true, 00:16:55.240 "seek_hole": false, 00:16:55.240 "seek_data": false, 00:16:55.240 "copy": true, 00:16:55.240 "nvme_iov_md": false 00:16:55.240 }, 00:16:55.240 "memory_domains": [ 00:16:55.240 { 00:16:55.240 "dma_device_id": "system", 00:16:55.240 "dma_device_type": 1 00:16:55.240 }, 00:16:55.240 { 00:16:55.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.240 "dma_device_type": 2 00:16:55.240 } 00:16:55.240 ], 00:16:55.240 "driver_specific": {} 00:16:55.240 }' 00:16:55.240 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.240 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.240 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:16:55.240 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.240 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.240 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:55.240 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.240 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.240 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:55.240 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.498 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.498 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:55.498 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:55.498 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:55.498 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:55.756 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:55.756 "name": "BaseBdev4", 00:16:55.756 "aliases": [ 00:16:55.756 "50d5f3d4-f610-43c9-a2ea-83a6367a11ee" 00:16:55.756 ], 00:16:55.756 "product_name": "Malloc disk", 00:16:55.756 "block_size": 512, 00:16:55.756 "num_blocks": 65536, 00:16:55.756 "uuid": "50d5f3d4-f610-43c9-a2ea-83a6367a11ee", 00:16:55.756 "assigned_rate_limits": { 00:16:55.756 "rw_ios_per_sec": 0, 00:16:55.756 "rw_mbytes_per_sec": 0, 00:16:55.756 "r_mbytes_per_sec": 0, 00:16:55.756 "w_mbytes_per_sec": 0 
00:16:55.756 }, 00:16:55.756 "claimed": true, 00:16:55.756 "claim_type": "exclusive_write", 00:16:55.756 "zoned": false, 00:16:55.756 "supported_io_types": { 00:16:55.756 "read": true, 00:16:55.756 "write": true, 00:16:55.756 "unmap": true, 00:16:55.756 "flush": true, 00:16:55.756 "reset": true, 00:16:55.756 "nvme_admin": false, 00:16:55.756 "nvme_io": false, 00:16:55.756 "nvme_io_md": false, 00:16:55.756 "write_zeroes": true, 00:16:55.756 "zcopy": true, 00:16:55.756 "get_zone_info": false, 00:16:55.756 "zone_management": false, 00:16:55.756 "zone_append": false, 00:16:55.756 "compare": false, 00:16:55.756 "compare_and_write": false, 00:16:55.756 "abort": true, 00:16:55.756 "seek_hole": false, 00:16:55.756 "seek_data": false, 00:16:55.756 "copy": true, 00:16:55.756 "nvme_iov_md": false 00:16:55.756 }, 00:16:55.756 "memory_domains": [ 00:16:55.756 { 00:16:55.756 "dma_device_id": "system", 00:16:55.756 "dma_device_type": 1 00:16:55.756 }, 00:16:55.756 { 00:16:55.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.756 "dma_device_type": 2 00:16:55.756 } 00:16:55.756 ], 00:16:55.756 "driver_specific": {} 00:16:55.756 }' 00:16:55.756 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.756 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.756 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:55.756 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.756 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.756 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:55.756 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.756 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:56.014 
10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:56.014 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:56.014 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:56.014 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:56.014 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:56.272 [2024-07-25 10:31:59.774299] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:56.272 [2024-07-25 10:31:59.774338] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.272 [2024-07-25 10:31:59.774412] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.273 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.530 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.530 "name": "Existed_Raid", 00:16:56.530 "uuid": "93cabef1-d6d0-4470-9c0d-a3a15549bcf3", 00:16:56.530 "strip_size_kb": 64, 00:16:56.530 "state": "offline", 00:16:56.530 "raid_level": "raid0", 00:16:56.530 "superblock": true, 00:16:56.530 "num_base_bdevs": 4, 00:16:56.530 "num_base_bdevs_discovered": 3, 00:16:56.530 "num_base_bdevs_operational": 3, 00:16:56.530 "base_bdevs_list": [ 00:16:56.530 { 00:16:56.530 "name": null, 00:16:56.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.530 "is_configured": false, 00:16:56.530 "data_offset": 2048, 00:16:56.530 "data_size": 63488 00:16:56.530 }, 00:16:56.530 { 00:16:56.530 "name": "BaseBdev2", 00:16:56.530 "uuid": "ba5abf36-edcb-47b8-ad31-cff7e8aeb0d9", 00:16:56.530 "is_configured": true, 00:16:56.530 "data_offset": 2048, 00:16:56.530 "data_size": 63488 00:16:56.530 }, 00:16:56.530 
{ 00:16:56.531 "name": "BaseBdev3", 00:16:56.531 "uuid": "040c1859-54f0-4e9c-bd1f-6c6b1952feb9", 00:16:56.531 "is_configured": true, 00:16:56.531 "data_offset": 2048, 00:16:56.531 "data_size": 63488 00:16:56.531 }, 00:16:56.531 { 00:16:56.531 "name": "BaseBdev4", 00:16:56.531 "uuid": "50d5f3d4-f610-43c9-a2ea-83a6367a11ee", 00:16:56.531 "is_configured": true, 00:16:56.531 "data_offset": 2048, 00:16:56.531 "data_size": 63488 00:16:56.531 } 00:16:56.531 ] 00:16:56.531 }' 00:16:56.531 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.531 10:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.096 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:57.096 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:57.096 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.096 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:57.353 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:57.353 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.354 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:57.612 [2024-07-25 10:32:01.084597] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:57.612 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:57.612 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:57.612 
10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.612 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:57.869 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:57.869 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.869 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:58.128 [2024-07-25 10:32:01.644002] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:58.128 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:58.128 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:58.128 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.128 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:58.386 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:58.386 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:58.386 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:58.645 [2024-07-25 10:32:02.206992] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:58.645 [2024-07-25 10:32:02.207059] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10decb0 name Existed_Raid, state offline 00:16:58.645 10:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:58.645 10:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:58.645 10:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.645 10:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:58.902 10:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:58.902 10:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:58.902 10:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:58.902 10:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:58.902 10:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:58.902 10:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:59.158 BaseBdev2 00:16:59.158 10:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:59.158 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:59.158 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:59.158 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:59.158 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:16:59.158 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:59.158 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:59.415 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:59.672 [ 00:16:59.672 { 00:16:59.672 "name": "BaseBdev2", 00:16:59.672 "aliases": [ 00:16:59.672 "8971da79-5996-4a19-a378-72e9617f2a2a" 00:16:59.672 ], 00:16:59.672 "product_name": "Malloc disk", 00:16:59.672 "block_size": 512, 00:16:59.672 "num_blocks": 65536, 00:16:59.672 "uuid": "8971da79-5996-4a19-a378-72e9617f2a2a", 00:16:59.672 "assigned_rate_limits": { 00:16:59.672 "rw_ios_per_sec": 0, 00:16:59.672 "rw_mbytes_per_sec": 0, 00:16:59.672 "r_mbytes_per_sec": 0, 00:16:59.672 "w_mbytes_per_sec": 0 00:16:59.672 }, 00:16:59.672 "claimed": false, 00:16:59.672 "zoned": false, 00:16:59.672 "supported_io_types": { 00:16:59.672 "read": true, 00:16:59.672 "write": true, 00:16:59.672 "unmap": true, 00:16:59.672 "flush": true, 00:16:59.672 "reset": true, 00:16:59.672 "nvme_admin": false, 00:16:59.672 "nvme_io": false, 00:16:59.672 "nvme_io_md": false, 00:16:59.672 "write_zeroes": true, 00:16:59.672 "zcopy": true, 00:16:59.672 "get_zone_info": false, 00:16:59.672 "zone_management": false, 00:16:59.672 "zone_append": false, 00:16:59.672 "compare": false, 00:16:59.672 "compare_and_write": false, 00:16:59.672 "abort": true, 00:16:59.672 "seek_hole": false, 00:16:59.672 "seek_data": false, 00:16:59.672 "copy": true, 00:16:59.672 "nvme_iov_md": false 00:16:59.672 }, 00:16:59.672 "memory_domains": [ 00:16:59.672 { 00:16:59.672 "dma_device_id": "system", 00:16:59.672 "dma_device_type": 1 00:16:59.672 }, 00:16:59.672 { 00:16:59.672 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.672 "dma_device_type": 2 00:16:59.672 } 00:16:59.672 ], 00:16:59.672 "driver_specific": {} 00:16:59.672 } 00:16:59.672 ] 00:16:59.672 10:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:59.672 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:59.672 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:59.672 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:59.929 BaseBdev3 00:16:59.929 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:59.929 10:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:59.929 10:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:59.929 10:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:59.929 10:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:59.929 10:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:59.929 10:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:00.188 10:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:00.446 [ 00:17:00.446 { 00:17:00.446 "name": "BaseBdev3", 00:17:00.446 "aliases": [ 00:17:00.446 "43eb8b20-69be-49a4-99d9-887d1def2d57" 
00:17:00.446 ], 00:17:00.446 "product_name": "Malloc disk", 00:17:00.446 "block_size": 512, 00:17:00.446 "num_blocks": 65536, 00:17:00.446 "uuid": "43eb8b20-69be-49a4-99d9-887d1def2d57", 00:17:00.446 "assigned_rate_limits": { 00:17:00.446 "rw_ios_per_sec": 0, 00:17:00.446 "rw_mbytes_per_sec": 0, 00:17:00.446 "r_mbytes_per_sec": 0, 00:17:00.446 "w_mbytes_per_sec": 0 00:17:00.446 }, 00:17:00.446 "claimed": false, 00:17:00.446 "zoned": false, 00:17:00.446 "supported_io_types": { 00:17:00.446 "read": true, 00:17:00.446 "write": true, 00:17:00.446 "unmap": true, 00:17:00.446 "flush": true, 00:17:00.446 "reset": true, 00:17:00.446 "nvme_admin": false, 00:17:00.446 "nvme_io": false, 00:17:00.446 "nvme_io_md": false, 00:17:00.446 "write_zeroes": true, 00:17:00.446 "zcopy": true, 00:17:00.446 "get_zone_info": false, 00:17:00.446 "zone_management": false, 00:17:00.446 "zone_append": false, 00:17:00.446 "compare": false, 00:17:00.446 "compare_and_write": false, 00:17:00.446 "abort": true, 00:17:00.446 "seek_hole": false, 00:17:00.446 "seek_data": false, 00:17:00.446 "copy": true, 00:17:00.446 "nvme_iov_md": false 00:17:00.446 }, 00:17:00.446 "memory_domains": [ 00:17:00.446 { 00:17:00.446 "dma_device_id": "system", 00:17:00.446 "dma_device_type": 1 00:17:00.446 }, 00:17:00.446 { 00:17:00.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.446 "dma_device_type": 2 00:17:00.446 } 00:17:00.446 ], 00:17:00.446 "driver_specific": {} 00:17:00.446 } 00:17:00.446 ] 00:17:00.446 10:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:00.446 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:00.446 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:00.446 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev4 00:17:00.704 BaseBdev4 00:17:00.704 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:00.704 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:00.704 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:00.704 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:00.704 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:00.704 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:00.704 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:00.962 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:01.219 [ 00:17:01.219 { 00:17:01.219 "name": "BaseBdev4", 00:17:01.219 "aliases": [ 00:17:01.219 "13cc7eda-cbc2-44e2-b04c-0329fe570d09" 00:17:01.219 ], 00:17:01.219 "product_name": "Malloc disk", 00:17:01.219 "block_size": 512, 00:17:01.219 "num_blocks": 65536, 00:17:01.219 "uuid": "13cc7eda-cbc2-44e2-b04c-0329fe570d09", 00:17:01.219 "assigned_rate_limits": { 00:17:01.219 "rw_ios_per_sec": 0, 00:17:01.219 "rw_mbytes_per_sec": 0, 00:17:01.219 "r_mbytes_per_sec": 0, 00:17:01.219 "w_mbytes_per_sec": 0 00:17:01.219 }, 00:17:01.219 "claimed": false, 00:17:01.219 "zoned": false, 00:17:01.219 "supported_io_types": { 00:17:01.219 "read": true, 00:17:01.219 "write": true, 00:17:01.219 "unmap": true, 00:17:01.219 "flush": true, 00:17:01.219 "reset": true, 00:17:01.219 "nvme_admin": false, 00:17:01.219 "nvme_io": false, 00:17:01.219 
"nvme_io_md": false, 00:17:01.219 "write_zeroes": true, 00:17:01.219 "zcopy": true, 00:17:01.219 "get_zone_info": false, 00:17:01.219 "zone_management": false, 00:17:01.219 "zone_append": false, 00:17:01.219 "compare": false, 00:17:01.219 "compare_and_write": false, 00:17:01.219 "abort": true, 00:17:01.219 "seek_hole": false, 00:17:01.219 "seek_data": false, 00:17:01.219 "copy": true, 00:17:01.220 "nvme_iov_md": false 00:17:01.220 }, 00:17:01.220 "memory_domains": [ 00:17:01.220 { 00:17:01.220 "dma_device_id": "system", 00:17:01.220 "dma_device_type": 1 00:17:01.220 }, 00:17:01.220 { 00:17:01.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.220 "dma_device_type": 2 00:17:01.220 } 00:17:01.220 ], 00:17:01.220 "driver_specific": {} 00:17:01.220 } 00:17:01.220 ] 00:17:01.220 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:01.220 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:01.220 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:01.220 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:01.477 [2024-07-25 10:32:04.957729] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:01.477 [2024-07-25 10:32:04.957770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:01.477 [2024-07-25 10:32:04.957815] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.477 [2024-07-25 10:32:04.959204] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.477 [2024-07-25 10:32:04.959250] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 
is claimed 00:17:01.477 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:01.477 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:01.477 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:01.477 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:01.477 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:01.477 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:01.477 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:01.477 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:01.477 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:01.477 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:01.477 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.477 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.734 10:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:01.734 "name": "Existed_Raid", 00:17:01.734 "uuid": "fb403512-7387-4700-bc6e-04ae39c17c1b", 00:17:01.734 "strip_size_kb": 64, 00:17:01.734 "state": "configuring", 00:17:01.734 "raid_level": "raid0", 00:17:01.734 "superblock": true, 00:17:01.734 "num_base_bdevs": 4, 00:17:01.734 "num_base_bdevs_discovered": 3, 00:17:01.734 
"num_base_bdevs_operational": 4, 00:17:01.734 "base_bdevs_list": [ 00:17:01.734 { 00:17:01.734 "name": "BaseBdev1", 00:17:01.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.734 "is_configured": false, 00:17:01.734 "data_offset": 0, 00:17:01.734 "data_size": 0 00:17:01.734 }, 00:17:01.734 { 00:17:01.734 "name": "BaseBdev2", 00:17:01.734 "uuid": "8971da79-5996-4a19-a378-72e9617f2a2a", 00:17:01.734 "is_configured": true, 00:17:01.734 "data_offset": 2048, 00:17:01.734 "data_size": 63488 00:17:01.734 }, 00:17:01.734 { 00:17:01.734 "name": "BaseBdev3", 00:17:01.734 "uuid": "43eb8b20-69be-49a4-99d9-887d1def2d57", 00:17:01.734 "is_configured": true, 00:17:01.734 "data_offset": 2048, 00:17:01.734 "data_size": 63488 00:17:01.734 }, 00:17:01.734 { 00:17:01.734 "name": "BaseBdev4", 00:17:01.734 "uuid": "13cc7eda-cbc2-44e2-b04c-0329fe570d09", 00:17:01.734 "is_configured": true, 00:17:01.734 "data_offset": 2048, 00:17:01.734 "data_size": 63488 00:17:01.734 } 00:17:01.734 ] 00:17:01.734 }' 00:17:01.734 10:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:01.734 10:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.299 10:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:02.557 [2024-07-25 10:32:06.008532] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:02.557 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:02.557 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:02.557 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:02.557 10:32:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:02.557 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:02.557 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:02.557 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.557 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.557 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.558 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.558 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.558 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.815 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.815 "name": "Existed_Raid", 00:17:02.815 "uuid": "fb403512-7387-4700-bc6e-04ae39c17c1b", 00:17:02.815 "strip_size_kb": 64, 00:17:02.815 "state": "configuring", 00:17:02.815 "raid_level": "raid0", 00:17:02.815 "superblock": true, 00:17:02.815 "num_base_bdevs": 4, 00:17:02.815 "num_base_bdevs_discovered": 2, 00:17:02.815 "num_base_bdevs_operational": 4, 00:17:02.815 "base_bdevs_list": [ 00:17:02.815 { 00:17:02.815 "name": "BaseBdev1", 00:17:02.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.815 "is_configured": false, 00:17:02.815 "data_offset": 0, 00:17:02.815 "data_size": 0 00:17:02.815 }, 00:17:02.815 { 00:17:02.815 "name": null, 00:17:02.815 "uuid": "8971da79-5996-4a19-a378-72e9617f2a2a", 00:17:02.815 "is_configured": false, 00:17:02.815 "data_offset": 2048, 00:17:02.815 "data_size": 
63488 00:17:02.815 }, 00:17:02.815 { 00:17:02.815 "name": "BaseBdev3", 00:17:02.815 "uuid": "43eb8b20-69be-49a4-99d9-887d1def2d57", 00:17:02.815 "is_configured": true, 00:17:02.815 "data_offset": 2048, 00:17:02.815 "data_size": 63488 00:17:02.815 }, 00:17:02.815 { 00:17:02.815 "name": "BaseBdev4", 00:17:02.815 "uuid": "13cc7eda-cbc2-44e2-b04c-0329fe570d09", 00:17:02.815 "is_configured": true, 00:17:02.815 "data_offset": 2048, 00:17:02.815 "data_size": 63488 00:17:02.815 } 00:17:02.815 ] 00:17:02.815 }' 00:17:02.815 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.815 10:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.382 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.382 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:03.382 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:03.382 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:03.640 [2024-07-25 10:32:07.322007] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.640 BaseBdev1 00:17:03.640 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:03.640 10:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:03.640 10:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:03.640 10:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:17:03.640 10:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:03.640 10:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:03.640 10:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:03.897 10:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:04.154 [ 00:17:04.154 { 00:17:04.154 "name": "BaseBdev1", 00:17:04.154 "aliases": [ 00:17:04.154 "2915614a-0674-4571-9ba5-3573badaa46d" 00:17:04.154 ], 00:17:04.154 "product_name": "Malloc disk", 00:17:04.154 "block_size": 512, 00:17:04.154 "num_blocks": 65536, 00:17:04.154 "uuid": "2915614a-0674-4571-9ba5-3573badaa46d", 00:17:04.154 "assigned_rate_limits": { 00:17:04.154 "rw_ios_per_sec": 0, 00:17:04.154 "rw_mbytes_per_sec": 0, 00:17:04.154 "r_mbytes_per_sec": 0, 00:17:04.154 "w_mbytes_per_sec": 0 00:17:04.154 }, 00:17:04.154 "claimed": true, 00:17:04.154 "claim_type": "exclusive_write", 00:17:04.154 "zoned": false, 00:17:04.154 "supported_io_types": { 00:17:04.154 "read": true, 00:17:04.154 "write": true, 00:17:04.154 "unmap": true, 00:17:04.154 "flush": true, 00:17:04.154 "reset": true, 00:17:04.154 "nvme_admin": false, 00:17:04.154 "nvme_io": false, 00:17:04.154 "nvme_io_md": false, 00:17:04.154 "write_zeroes": true, 00:17:04.154 "zcopy": true, 00:17:04.154 "get_zone_info": false, 00:17:04.154 "zone_management": false, 00:17:04.154 "zone_append": false, 00:17:04.154 "compare": false, 00:17:04.154 "compare_and_write": false, 00:17:04.154 "abort": true, 00:17:04.154 "seek_hole": false, 00:17:04.154 "seek_data": false, 00:17:04.154 "copy": true, 00:17:04.154 "nvme_iov_md": false 00:17:04.154 }, 00:17:04.154 
"memory_domains": [ 00:17:04.154 { 00:17:04.154 "dma_device_id": "system", 00:17:04.154 "dma_device_type": 1 00:17:04.154 }, 00:17:04.154 { 00:17:04.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.154 "dma_device_type": 2 00:17:04.154 } 00:17:04.154 ], 00:17:04.154 "driver_specific": {} 00:17:04.154 } 00:17:04.154 ] 00:17:04.154 10:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:04.154 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:04.154 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:04.154 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:04.154 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:04.154 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:04.154 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:04.154 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:04.154 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:04.154 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:04.154 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:04.154 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.155 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.412 10:32:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:04.412 "name": "Existed_Raid", 00:17:04.412 "uuid": "fb403512-7387-4700-bc6e-04ae39c17c1b", 00:17:04.412 "strip_size_kb": 64, 00:17:04.412 "state": "configuring", 00:17:04.412 "raid_level": "raid0", 00:17:04.412 "superblock": true, 00:17:04.412 "num_base_bdevs": 4, 00:17:04.412 "num_base_bdevs_discovered": 3, 00:17:04.412 "num_base_bdevs_operational": 4, 00:17:04.412 "base_bdevs_list": [ 00:17:04.412 { 00:17:04.412 "name": "BaseBdev1", 00:17:04.412 "uuid": "2915614a-0674-4571-9ba5-3573badaa46d", 00:17:04.412 "is_configured": true, 00:17:04.412 "data_offset": 2048, 00:17:04.412 "data_size": 63488 00:17:04.412 }, 00:17:04.412 { 00:17:04.412 "name": null, 00:17:04.412 "uuid": "8971da79-5996-4a19-a378-72e9617f2a2a", 00:17:04.412 "is_configured": false, 00:17:04.412 "data_offset": 2048, 00:17:04.412 "data_size": 63488 00:17:04.412 }, 00:17:04.412 { 00:17:04.412 "name": "BaseBdev3", 00:17:04.412 "uuid": "43eb8b20-69be-49a4-99d9-887d1def2d57", 00:17:04.412 "is_configured": true, 00:17:04.412 "data_offset": 2048, 00:17:04.412 "data_size": 63488 00:17:04.412 }, 00:17:04.412 { 00:17:04.412 "name": "BaseBdev4", 00:17:04.412 "uuid": "13cc7eda-cbc2-44e2-b04c-0329fe570d09", 00:17:04.412 "is_configured": true, 00:17:04.412 "data_offset": 2048, 00:17:04.412 "data_size": 63488 00:17:04.412 } 00:17:04.412 ] 00:17:04.412 }' 00:17:04.412 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:04.412 10:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.978 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.978 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:05.544 10:32:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:05.544 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:05.544 [2024-07-25 10:32:09.186977] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:05.544 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:05.544 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:05.544 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:05.544 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:05.544 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:05.544 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:05.544 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.544 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.544 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.544 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.544 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.544 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.802 10:32:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.802 "name": "Existed_Raid", 00:17:05.802 "uuid": "fb403512-7387-4700-bc6e-04ae39c17c1b", 00:17:05.802 "strip_size_kb": 64, 00:17:05.802 "state": "configuring", 00:17:05.802 "raid_level": "raid0", 00:17:05.802 "superblock": true, 00:17:05.802 "num_base_bdevs": 4, 00:17:05.802 "num_base_bdevs_discovered": 2, 00:17:05.802 "num_base_bdevs_operational": 4, 00:17:05.802 "base_bdevs_list": [ 00:17:05.802 { 00:17:05.802 "name": "BaseBdev1", 00:17:05.802 "uuid": "2915614a-0674-4571-9ba5-3573badaa46d", 00:17:05.802 "is_configured": true, 00:17:05.802 "data_offset": 2048, 00:17:05.802 "data_size": 63488 00:17:05.802 }, 00:17:05.802 { 00:17:05.802 "name": null, 00:17:05.802 "uuid": "8971da79-5996-4a19-a378-72e9617f2a2a", 00:17:05.802 "is_configured": false, 00:17:05.802 "data_offset": 2048, 00:17:05.802 "data_size": 63488 00:17:05.802 }, 00:17:05.802 { 00:17:05.802 "name": null, 00:17:05.802 "uuid": "43eb8b20-69be-49a4-99d9-887d1def2d57", 00:17:05.802 "is_configured": false, 00:17:05.802 "data_offset": 2048, 00:17:05.802 "data_size": 63488 00:17:05.802 }, 00:17:05.802 { 00:17:05.802 "name": "BaseBdev4", 00:17:05.802 "uuid": "13cc7eda-cbc2-44e2-b04c-0329fe570d09", 00:17:05.802 "is_configured": true, 00:17:05.802 "data_offset": 2048, 00:17:05.802 "data_size": 63488 00:17:05.802 } 00:17:05.802 ] 00:17:05.802 }' 00:17:05.802 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.802 10:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.368 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.368 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:06.625 10:32:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:06.625 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:06.883 [2024-07-25 10:32:10.534585] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.883 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:06.883 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:06.883 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:06.883 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:06.883 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:06.883 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:06.883 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:06.883 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:06.883 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:06.883 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:06.883 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.883 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.146 10:32:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.146 "name": "Existed_Raid", 00:17:07.146 "uuid": "fb403512-7387-4700-bc6e-04ae39c17c1b", 00:17:07.146 "strip_size_kb": 64, 00:17:07.146 "state": "configuring", 00:17:07.146 "raid_level": "raid0", 00:17:07.146 "superblock": true, 00:17:07.146 "num_base_bdevs": 4, 00:17:07.146 "num_base_bdevs_discovered": 3, 00:17:07.146 "num_base_bdevs_operational": 4, 00:17:07.146 "base_bdevs_list": [ 00:17:07.146 { 00:17:07.146 "name": "BaseBdev1", 00:17:07.146 "uuid": "2915614a-0674-4571-9ba5-3573badaa46d", 00:17:07.146 "is_configured": true, 00:17:07.146 "data_offset": 2048, 00:17:07.146 "data_size": 63488 00:17:07.146 }, 00:17:07.146 { 00:17:07.146 "name": null, 00:17:07.146 "uuid": "8971da79-5996-4a19-a378-72e9617f2a2a", 00:17:07.146 "is_configured": false, 00:17:07.146 "data_offset": 2048, 00:17:07.146 "data_size": 63488 00:17:07.146 }, 00:17:07.146 { 00:17:07.146 "name": "BaseBdev3", 00:17:07.146 "uuid": "43eb8b20-69be-49a4-99d9-887d1def2d57", 00:17:07.146 "is_configured": true, 00:17:07.146 "data_offset": 2048, 00:17:07.146 "data_size": 63488 00:17:07.146 }, 00:17:07.146 { 00:17:07.146 "name": "BaseBdev4", 00:17:07.146 "uuid": "13cc7eda-cbc2-44e2-b04c-0329fe570d09", 00:17:07.146 "is_configured": true, 00:17:07.146 "data_offset": 2048, 00:17:07.146 "data_size": 63488 00:17:07.146 } 00:17:07.146 ] 00:17:07.146 }' 00:17:07.146 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.146 10:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.755 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.755 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:08.014 10:32:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:08.014 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:08.273 [2024-07-25 10:32:11.882231] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.273 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:08.273 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:08.273 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:08.273 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:08.273 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:08.273 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:08.273 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.273 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.273 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.273 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.273 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.273 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.531 10:32:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.531 "name": "Existed_Raid", 00:17:08.531 "uuid": "fb403512-7387-4700-bc6e-04ae39c17c1b", 00:17:08.531 "strip_size_kb": 64, 00:17:08.531 "state": "configuring", 00:17:08.531 "raid_level": "raid0", 00:17:08.531 "superblock": true, 00:17:08.531 "num_base_bdevs": 4, 00:17:08.531 "num_base_bdevs_discovered": 2, 00:17:08.531 "num_base_bdevs_operational": 4, 00:17:08.531 "base_bdevs_list": [ 00:17:08.531 { 00:17:08.531 "name": null, 00:17:08.531 "uuid": "2915614a-0674-4571-9ba5-3573badaa46d", 00:17:08.531 "is_configured": false, 00:17:08.531 "data_offset": 2048, 00:17:08.531 "data_size": 63488 00:17:08.531 }, 00:17:08.531 { 00:17:08.531 "name": null, 00:17:08.531 "uuid": "8971da79-5996-4a19-a378-72e9617f2a2a", 00:17:08.531 "is_configured": false, 00:17:08.531 "data_offset": 2048, 00:17:08.531 "data_size": 63488 00:17:08.531 }, 00:17:08.531 { 00:17:08.531 "name": "BaseBdev3", 00:17:08.531 "uuid": "43eb8b20-69be-49a4-99d9-887d1def2d57", 00:17:08.531 "is_configured": true, 00:17:08.531 "data_offset": 2048, 00:17:08.531 "data_size": 63488 00:17:08.531 }, 00:17:08.531 { 00:17:08.531 "name": "BaseBdev4", 00:17:08.532 "uuid": "13cc7eda-cbc2-44e2-b04c-0329fe570d09", 00:17:08.532 "is_configured": true, 00:17:08.532 "data_offset": 2048, 00:17:08.532 "data_size": 63488 00:17:08.532 } 00:17:08.532 ] 00:17:08.532 }' 00:17:08.532 10:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.532 10:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.096 10:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.096 10:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:09.353 10:32:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:09.353 10:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:09.610 [2024-07-25 10:32:13.183794] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.610 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:09.610 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:09.610 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.610 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:09.610 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:09.610 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:09.610 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.610 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.610 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.610 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.610 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.610 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.867 10:32:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.867 "name": "Existed_Raid", 00:17:09.867 "uuid": "fb403512-7387-4700-bc6e-04ae39c17c1b", 00:17:09.867 "strip_size_kb": 64, 00:17:09.867 "state": "configuring", 00:17:09.867 "raid_level": "raid0", 00:17:09.867 "superblock": true, 00:17:09.867 "num_base_bdevs": 4, 00:17:09.867 "num_base_bdevs_discovered": 3, 00:17:09.867 "num_base_bdevs_operational": 4, 00:17:09.867 "base_bdevs_list": [ 00:17:09.867 { 00:17:09.867 "name": null, 00:17:09.867 "uuid": "2915614a-0674-4571-9ba5-3573badaa46d", 00:17:09.867 "is_configured": false, 00:17:09.867 "data_offset": 2048, 00:17:09.867 "data_size": 63488 00:17:09.867 }, 00:17:09.867 { 00:17:09.867 "name": "BaseBdev2", 00:17:09.867 "uuid": "8971da79-5996-4a19-a378-72e9617f2a2a", 00:17:09.867 "is_configured": true, 00:17:09.867 "data_offset": 2048, 00:17:09.867 "data_size": 63488 00:17:09.867 }, 00:17:09.867 { 00:17:09.867 "name": "BaseBdev3", 00:17:09.867 "uuid": "43eb8b20-69be-49a4-99d9-887d1def2d57", 00:17:09.867 "is_configured": true, 00:17:09.867 "data_offset": 2048, 00:17:09.867 "data_size": 63488 00:17:09.867 }, 00:17:09.867 { 00:17:09.867 "name": "BaseBdev4", 00:17:09.867 "uuid": "13cc7eda-cbc2-44e2-b04c-0329fe570d09", 00:17:09.867 "is_configured": true, 00:17:09.867 "data_offset": 2048, 00:17:09.867 "data_size": 63488 00:17:09.867 } 00:17:09.867 ] 00:17:09.867 }' 00:17:09.867 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.867 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.430 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.430 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:10.687 10:32:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:10.687 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.687 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:10.945 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2915614a-0674-4571-9ba5-3573badaa46d 00:17:11.202 [2024-07-25 10:32:14.762635] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:11.202 [2024-07-25 10:32:14.762888] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x10de8c0 00:17:11.202 [2024-07-25 10:32:14.762907] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:11.202 [2024-07-25 10:32:14.763092] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdd95e0 00:17:11.202 [2024-07-25 10:32:14.763257] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10de8c0 00:17:11.202 [2024-07-25 10:32:14.763274] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10de8c0 00:17:11.202 [2024-07-25 10:32:14.763391] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.202 NewBaseBdev 00:17:11.202 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:11.202 10:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:11.202 10:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:11.202 10:32:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:17:11.202 10:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:11.202 10:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:11.202 10:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.460 10:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:11.718 [ 00:17:11.718 { 00:17:11.718 "name": "NewBaseBdev", 00:17:11.718 "aliases": [ 00:17:11.718 "2915614a-0674-4571-9ba5-3573badaa46d" 00:17:11.718 ], 00:17:11.718 "product_name": "Malloc disk", 00:17:11.718 "block_size": 512, 00:17:11.718 "num_blocks": 65536, 00:17:11.718 "uuid": "2915614a-0674-4571-9ba5-3573badaa46d", 00:17:11.718 "assigned_rate_limits": { 00:17:11.718 "rw_ios_per_sec": 0, 00:17:11.718 "rw_mbytes_per_sec": 0, 00:17:11.718 "r_mbytes_per_sec": 0, 00:17:11.718 "w_mbytes_per_sec": 0 00:17:11.718 }, 00:17:11.718 "claimed": true, 00:17:11.718 "claim_type": "exclusive_write", 00:17:11.718 "zoned": false, 00:17:11.718 "supported_io_types": { 00:17:11.718 "read": true, 00:17:11.718 "write": true, 00:17:11.718 "unmap": true, 00:17:11.718 "flush": true, 00:17:11.718 "reset": true, 00:17:11.718 "nvme_admin": false, 00:17:11.718 "nvme_io": false, 00:17:11.719 "nvme_io_md": false, 00:17:11.719 "write_zeroes": true, 00:17:11.719 "zcopy": true, 00:17:11.719 "get_zone_info": false, 00:17:11.719 "zone_management": false, 00:17:11.719 "zone_append": false, 00:17:11.719 "compare": false, 00:17:11.719 "compare_and_write": false, 00:17:11.719 "abort": true, 00:17:11.719 "seek_hole": false, 00:17:11.719 "seek_data": false, 00:17:11.719 "copy": true, 00:17:11.719 
"nvme_iov_md": false 00:17:11.719 }, 00:17:11.719 "memory_domains": [ 00:17:11.719 { 00:17:11.719 "dma_device_id": "system", 00:17:11.719 "dma_device_type": 1 00:17:11.719 }, 00:17:11.719 { 00:17:11.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.719 "dma_device_type": 2 00:17:11.719 } 00:17:11.719 ], 00:17:11.719 "driver_specific": {} 00:17:11.719 } 00:17:11.719 ] 00:17:11.719 10:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:11.719 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:11.719 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:11.719 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:11.719 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:11.719 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.719 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:11.719 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.719 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.719 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.719 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.719 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.719 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:17:11.977 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.977 "name": "Existed_Raid", 00:17:11.977 "uuid": "fb403512-7387-4700-bc6e-04ae39c17c1b", 00:17:11.977 "strip_size_kb": 64, 00:17:11.977 "state": "online", 00:17:11.977 "raid_level": "raid0", 00:17:11.977 "superblock": true, 00:17:11.977 "num_base_bdevs": 4, 00:17:11.977 "num_base_bdevs_discovered": 4, 00:17:11.977 "num_base_bdevs_operational": 4, 00:17:11.977 "base_bdevs_list": [ 00:17:11.977 { 00:17:11.977 "name": "NewBaseBdev", 00:17:11.977 "uuid": "2915614a-0674-4571-9ba5-3573badaa46d", 00:17:11.977 "is_configured": true, 00:17:11.977 "data_offset": 2048, 00:17:11.977 "data_size": 63488 00:17:11.977 }, 00:17:11.977 { 00:17:11.977 "name": "BaseBdev2", 00:17:11.977 "uuid": "8971da79-5996-4a19-a378-72e9617f2a2a", 00:17:11.977 "is_configured": true, 00:17:11.977 "data_offset": 2048, 00:17:11.977 "data_size": 63488 00:17:11.977 }, 00:17:11.977 { 00:17:11.977 "name": "BaseBdev3", 00:17:11.977 "uuid": "43eb8b20-69be-49a4-99d9-887d1def2d57", 00:17:11.977 "is_configured": true, 00:17:11.977 "data_offset": 2048, 00:17:11.977 "data_size": 63488 00:17:11.977 }, 00:17:11.977 { 00:17:11.977 "name": "BaseBdev4", 00:17:11.977 "uuid": "13cc7eda-cbc2-44e2-b04c-0329fe570d09", 00:17:11.977 "is_configured": true, 00:17:11.977 "data_offset": 2048, 00:17:11.977 "data_size": 63488 00:17:11.977 } 00:17:11.977 ] 00:17:11.977 }' 00:17:11.977 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.977 10:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.542 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:12.542 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:12.542 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # 
local raid_bdev_info 00:17:12.542 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:12.542 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:12.542 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:12.542 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:12.542 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:12.799 [2024-07-25 10:32:16.435407] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.799 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:12.799 "name": "Existed_Raid", 00:17:12.799 "aliases": [ 00:17:12.799 "fb403512-7387-4700-bc6e-04ae39c17c1b" 00:17:12.799 ], 00:17:12.799 "product_name": "Raid Volume", 00:17:12.799 "block_size": 512, 00:17:12.799 "num_blocks": 253952, 00:17:12.799 "uuid": "fb403512-7387-4700-bc6e-04ae39c17c1b", 00:17:12.799 "assigned_rate_limits": { 00:17:12.799 "rw_ios_per_sec": 0, 00:17:12.799 "rw_mbytes_per_sec": 0, 00:17:12.799 "r_mbytes_per_sec": 0, 00:17:12.799 "w_mbytes_per_sec": 0 00:17:12.799 }, 00:17:12.799 "claimed": false, 00:17:12.799 "zoned": false, 00:17:12.799 "supported_io_types": { 00:17:12.799 "read": true, 00:17:12.799 "write": true, 00:17:12.799 "unmap": true, 00:17:12.799 "flush": true, 00:17:12.799 "reset": true, 00:17:12.799 "nvme_admin": false, 00:17:12.799 "nvme_io": false, 00:17:12.799 "nvme_io_md": false, 00:17:12.799 "write_zeroes": true, 00:17:12.799 "zcopy": false, 00:17:12.799 "get_zone_info": false, 00:17:12.799 "zone_management": false, 00:17:12.799 "zone_append": false, 00:17:12.799 "compare": false, 00:17:12.799 "compare_and_write": false, 00:17:12.799 "abort": false, 
00:17:12.799 "seek_hole": false, 00:17:12.799 "seek_data": false, 00:17:12.799 "copy": false, 00:17:12.799 "nvme_iov_md": false 00:17:12.799 }, 00:17:12.799 "memory_domains": [ 00:17:12.799 { 00:17:12.799 "dma_device_id": "system", 00:17:12.799 "dma_device_type": 1 00:17:12.799 }, 00:17:12.799 { 00:17:12.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.799 "dma_device_type": 2 00:17:12.799 }, 00:17:12.799 { 00:17:12.799 "dma_device_id": "system", 00:17:12.799 "dma_device_type": 1 00:17:12.799 }, 00:17:12.799 { 00:17:12.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.799 "dma_device_type": 2 00:17:12.799 }, 00:17:12.799 { 00:17:12.800 "dma_device_id": "system", 00:17:12.800 "dma_device_type": 1 00:17:12.800 }, 00:17:12.800 { 00:17:12.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.800 "dma_device_type": 2 00:17:12.800 }, 00:17:12.800 { 00:17:12.800 "dma_device_id": "system", 00:17:12.800 "dma_device_type": 1 00:17:12.800 }, 00:17:12.800 { 00:17:12.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.800 "dma_device_type": 2 00:17:12.800 } 00:17:12.800 ], 00:17:12.800 "driver_specific": { 00:17:12.800 "raid": { 00:17:12.800 "uuid": "fb403512-7387-4700-bc6e-04ae39c17c1b", 00:17:12.800 "strip_size_kb": 64, 00:17:12.800 "state": "online", 00:17:12.800 "raid_level": "raid0", 00:17:12.800 "superblock": true, 00:17:12.800 "num_base_bdevs": 4, 00:17:12.800 "num_base_bdevs_discovered": 4, 00:17:12.800 "num_base_bdevs_operational": 4, 00:17:12.800 "base_bdevs_list": [ 00:17:12.800 { 00:17:12.800 "name": "NewBaseBdev", 00:17:12.800 "uuid": "2915614a-0674-4571-9ba5-3573badaa46d", 00:17:12.800 "is_configured": true, 00:17:12.800 "data_offset": 2048, 00:17:12.800 "data_size": 63488 00:17:12.800 }, 00:17:12.800 { 00:17:12.800 "name": "BaseBdev2", 00:17:12.800 "uuid": "8971da79-5996-4a19-a378-72e9617f2a2a", 00:17:12.800 "is_configured": true, 00:17:12.800 "data_offset": 2048, 00:17:12.800 "data_size": 63488 00:17:12.800 }, 00:17:12.800 { 00:17:12.800 "name": 
"BaseBdev3", 00:17:12.800 "uuid": "43eb8b20-69be-49a4-99d9-887d1def2d57", 00:17:12.800 "is_configured": true, 00:17:12.800 "data_offset": 2048, 00:17:12.800 "data_size": 63488 00:17:12.800 }, 00:17:12.800 { 00:17:12.800 "name": "BaseBdev4", 00:17:12.800 "uuid": "13cc7eda-cbc2-44e2-b04c-0329fe570d09", 00:17:12.800 "is_configured": true, 00:17:12.800 "data_offset": 2048, 00:17:12.800 "data_size": 63488 00:17:12.800 } 00:17:12.800 ] 00:17:12.800 } 00:17:12.800 } 00:17:12.800 }' 00:17:12.800 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:12.800 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:12.800 BaseBdev2 00:17:12.800 BaseBdev3 00:17:12.800 BaseBdev4' 00:17:12.800 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.800 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:12.800 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:13.365 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:13.365 "name": "NewBaseBdev", 00:17:13.365 "aliases": [ 00:17:13.365 "2915614a-0674-4571-9ba5-3573badaa46d" 00:17:13.365 ], 00:17:13.365 "product_name": "Malloc disk", 00:17:13.365 "block_size": 512, 00:17:13.365 "num_blocks": 65536, 00:17:13.365 "uuid": "2915614a-0674-4571-9ba5-3573badaa46d", 00:17:13.365 "assigned_rate_limits": { 00:17:13.365 "rw_ios_per_sec": 0, 00:17:13.365 "rw_mbytes_per_sec": 0, 00:17:13.365 "r_mbytes_per_sec": 0, 00:17:13.365 "w_mbytes_per_sec": 0 00:17:13.365 }, 00:17:13.365 "claimed": true, 00:17:13.365 "claim_type": "exclusive_write", 00:17:13.365 "zoned": false, 00:17:13.365 
"supported_io_types": { 00:17:13.365 "read": true, 00:17:13.365 "write": true, 00:17:13.365 "unmap": true, 00:17:13.365 "flush": true, 00:17:13.365 "reset": true, 00:17:13.365 "nvme_admin": false, 00:17:13.365 "nvme_io": false, 00:17:13.365 "nvme_io_md": false, 00:17:13.365 "write_zeroes": true, 00:17:13.365 "zcopy": true, 00:17:13.365 "get_zone_info": false, 00:17:13.365 "zone_management": false, 00:17:13.365 "zone_append": false, 00:17:13.365 "compare": false, 00:17:13.365 "compare_and_write": false, 00:17:13.365 "abort": true, 00:17:13.365 "seek_hole": false, 00:17:13.365 "seek_data": false, 00:17:13.365 "copy": true, 00:17:13.365 "nvme_iov_md": false 00:17:13.365 }, 00:17:13.365 "memory_domains": [ 00:17:13.365 { 00:17:13.365 "dma_device_id": "system", 00:17:13.365 "dma_device_type": 1 00:17:13.365 }, 00:17:13.365 { 00:17:13.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.365 "dma_device_type": 2 00:17:13.365 } 00:17:13.365 ], 00:17:13.365 "driver_specific": {} 00:17:13.365 }' 00:17:13.365 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.365 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.365 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.365 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.365 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.365 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.365 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.365 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.365 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.365 10:32:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.365 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.622 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:13.622 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:13.622 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:13.622 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:13.879 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:13.879 "name": "BaseBdev2", 00:17:13.879 "aliases": [ 00:17:13.879 "8971da79-5996-4a19-a378-72e9617f2a2a" 00:17:13.879 ], 00:17:13.879 "product_name": "Malloc disk", 00:17:13.879 "block_size": 512, 00:17:13.879 "num_blocks": 65536, 00:17:13.879 "uuid": "8971da79-5996-4a19-a378-72e9617f2a2a", 00:17:13.879 "assigned_rate_limits": { 00:17:13.879 "rw_ios_per_sec": 0, 00:17:13.879 "rw_mbytes_per_sec": 0, 00:17:13.879 "r_mbytes_per_sec": 0, 00:17:13.879 "w_mbytes_per_sec": 0 00:17:13.879 }, 00:17:13.879 "claimed": true, 00:17:13.879 "claim_type": "exclusive_write", 00:17:13.879 "zoned": false, 00:17:13.879 "supported_io_types": { 00:17:13.879 "read": true, 00:17:13.879 "write": true, 00:17:13.879 "unmap": true, 00:17:13.879 "flush": true, 00:17:13.879 "reset": true, 00:17:13.879 "nvme_admin": false, 00:17:13.879 "nvme_io": false, 00:17:13.879 "nvme_io_md": false, 00:17:13.879 "write_zeroes": true, 00:17:13.879 "zcopy": true, 00:17:13.879 "get_zone_info": false, 00:17:13.879 "zone_management": false, 00:17:13.879 "zone_append": false, 00:17:13.879 "compare": false, 00:17:13.879 "compare_and_write": false, 00:17:13.879 "abort": true, 00:17:13.879 
"seek_hole": false, 00:17:13.879 "seek_data": false, 00:17:13.879 "copy": true, 00:17:13.879 "nvme_iov_md": false 00:17:13.879 }, 00:17:13.879 "memory_domains": [ 00:17:13.879 { 00:17:13.879 "dma_device_id": "system", 00:17:13.879 "dma_device_type": 1 00:17:13.879 }, 00:17:13.879 { 00:17:13.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.879 "dma_device_type": 2 00:17:13.879 } 00:17:13.879 ], 00:17:13.879 "driver_specific": {} 00:17:13.879 }' 00:17:13.879 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.879 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.879 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.879 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.879 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.879 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.879 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.879 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.879 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.879 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.138 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.138 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:14.138 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:14.138 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:14.138 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:14.396 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:14.396 "name": "BaseBdev3", 00:17:14.396 "aliases": [ 00:17:14.396 "43eb8b20-69be-49a4-99d9-887d1def2d57" 00:17:14.396 ], 00:17:14.396 "product_name": "Malloc disk", 00:17:14.396 "block_size": 512, 00:17:14.396 "num_blocks": 65536, 00:17:14.396 "uuid": "43eb8b20-69be-49a4-99d9-887d1def2d57", 00:17:14.396 "assigned_rate_limits": { 00:17:14.396 "rw_ios_per_sec": 0, 00:17:14.396 "rw_mbytes_per_sec": 0, 00:17:14.396 "r_mbytes_per_sec": 0, 00:17:14.396 "w_mbytes_per_sec": 0 00:17:14.396 }, 00:17:14.396 "claimed": true, 00:17:14.396 "claim_type": "exclusive_write", 00:17:14.396 "zoned": false, 00:17:14.396 "supported_io_types": { 00:17:14.396 "read": true, 00:17:14.396 "write": true, 00:17:14.396 "unmap": true, 00:17:14.396 "flush": true, 00:17:14.396 "reset": true, 00:17:14.396 "nvme_admin": false, 00:17:14.396 "nvme_io": false, 00:17:14.396 "nvme_io_md": false, 00:17:14.396 "write_zeroes": true, 00:17:14.396 "zcopy": true, 00:17:14.396 "get_zone_info": false, 00:17:14.396 "zone_management": false, 00:17:14.396 "zone_append": false, 00:17:14.396 "compare": false, 00:17:14.396 "compare_and_write": false, 00:17:14.396 "abort": true, 00:17:14.396 "seek_hole": false, 00:17:14.396 "seek_data": false, 00:17:14.396 "copy": true, 00:17:14.396 "nvme_iov_md": false 00:17:14.396 }, 00:17:14.396 "memory_domains": [ 00:17:14.396 { 00:17:14.396 "dma_device_id": "system", 00:17:14.396 "dma_device_type": 1 00:17:14.396 }, 00:17:14.396 { 00:17:14.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.396 "dma_device_type": 2 00:17:14.396 } 00:17:14.396 ], 00:17:14.396 "driver_specific": {} 00:17:14.396 }' 00:17:14.396 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.396 
10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.396 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:14.396 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.396 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.396 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:14.396 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.396 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.654 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:14.654 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.654 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.654 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:14.654 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:14.654 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:14.654 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:14.912 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:14.912 "name": "BaseBdev4", 00:17:14.912 "aliases": [ 00:17:14.912 "13cc7eda-cbc2-44e2-b04c-0329fe570d09" 00:17:14.912 ], 00:17:14.912 "product_name": "Malloc disk", 00:17:14.912 "block_size": 512, 00:17:14.912 "num_blocks": 65536, 00:17:14.912 "uuid": "13cc7eda-cbc2-44e2-b04c-0329fe570d09", 00:17:14.912 
"assigned_rate_limits": { 00:17:14.912 "rw_ios_per_sec": 0, 00:17:14.912 "rw_mbytes_per_sec": 0, 00:17:14.912 "r_mbytes_per_sec": 0, 00:17:14.912 "w_mbytes_per_sec": 0 00:17:14.912 }, 00:17:14.912 "claimed": true, 00:17:14.912 "claim_type": "exclusive_write", 00:17:14.912 "zoned": false, 00:17:14.912 "supported_io_types": { 00:17:14.912 "read": true, 00:17:14.912 "write": true, 00:17:14.912 "unmap": true, 00:17:14.912 "flush": true, 00:17:14.912 "reset": true, 00:17:14.912 "nvme_admin": false, 00:17:14.912 "nvme_io": false, 00:17:14.912 "nvme_io_md": false, 00:17:14.912 "write_zeroes": true, 00:17:14.912 "zcopy": true, 00:17:14.912 "get_zone_info": false, 00:17:14.912 "zone_management": false, 00:17:14.912 "zone_append": false, 00:17:14.912 "compare": false, 00:17:14.912 "compare_and_write": false, 00:17:14.912 "abort": true, 00:17:14.912 "seek_hole": false, 00:17:14.912 "seek_data": false, 00:17:14.912 "copy": true, 00:17:14.912 "nvme_iov_md": false 00:17:14.912 }, 00:17:14.912 "memory_domains": [ 00:17:14.912 { 00:17:14.912 "dma_device_id": "system", 00:17:14.912 "dma_device_type": 1 00:17:14.912 }, 00:17:14.912 { 00:17:14.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.912 "dma_device_type": 2 00:17:14.912 } 00:17:14.912 ], 00:17:14.912 "driver_specific": {} 00:17:14.912 }' 00:17:14.912 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.912 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.912 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:14.912 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.912 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.912 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:14.912 10:32:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.169 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.169 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:15.170 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.170 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.170 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:15.170 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:15.428 [2024-07-25 10:32:18.973866] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.428 [2024-07-25 10:32:18.973900] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.428 [2024-07-25 10:32:18.973981] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.428 [2024-07-25 10:32:18.974057] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.428 [2024-07-25 10:32:18.974072] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10de8c0 name Existed_Raid, state offline 00:17:15.428 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2392930 00:17:15.428 10:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2392930 ']' 00:17:15.428 10:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2392930 00:17:15.428 10:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:15.428 10:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:17:15.428 10:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2392930 00:17:15.428 10:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:15.428 10:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:15.428 10:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2392930' 00:17:15.428 killing process with pid 2392930 00:17:15.428 10:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2392930 00:17:15.428 [2024-07-25 10:32:19.022699] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:15.428 10:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2392930 00:17:15.428 [2024-07-25 10:32:19.074188] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:15.686 10:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:15.686 00:17:15.686 real 0m32.471s 00:17:15.686 user 1m0.656s 00:17:15.686 sys 0m4.352s 00:17:15.686 10:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:15.686 10:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.686 ************************************ 00:17:15.686 END TEST raid_state_function_test_sb 00:17:15.686 ************************************ 00:17:15.686 10:32:19 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:15.686 10:32:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:15.686 10:32:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:15.686 10:32:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.945 ************************************ 
00:17:15.945 START TEST raid_superblock_test 00:17:15.945 ************************************ 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
raid_pid=2397394 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2397394 /var/tmp/spdk-raid.sock 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2397394 ']' 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:15.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:15.945 10:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.945 [2024-07-25 10:32:19.466547] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:17:15.945 [2024-07-25 10:32:19.466631] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397394 ] 00:17:15.945 [2024-07-25 10:32:19.544880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.203 [2024-07-25 10:32:19.658134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.203 [2024-07-25 10:32:19.732628] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.203 [2024-07-25 10:32:19.732675] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.769 10:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:16.769 10:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:17:16.769 10:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:16.769 10:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:16.769 10:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:16.769 10:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:16.769 10:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:16.769 10:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:16.769 10:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:16.769 10:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:16.769 10:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:17.027 malloc1 00:17:17.285 10:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:17.543 [2024-07-25 10:32:21.014335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:17.543 [2024-07-25 10:32:21.014396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.543 [2024-07-25 10:32:21.014425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10682b0 00:17:17.543 [2024-07-25 10:32:21.014441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.543 [2024-07-25 10:32:21.016242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.543 [2024-07-25 10:32:21.016271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:17.543 pt1 00:17:17.543 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:17.543 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:17.543 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:17.543 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:17.543 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:17.543 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:17.543 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:17.543 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:17.543 10:32:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:17.802 malloc2 00:17:17.802 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:18.060 [2024-07-25 10:32:21.539726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:18.060 [2024-07-25 10:32:21.539805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.060 [2024-07-25 10:32:21.539829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x121b1e0 00:17:18.060 [2024-07-25 10:32:21.539843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.060 [2024-07-25 10:32:21.541597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.060 [2024-07-25 10:32:21.541621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:18.060 pt2 00:17:18.060 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:18.060 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:18.060 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:17:18.060 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:17:18.060 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:18.060 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.060 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.060 10:32:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.060 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:18.318 malloc3 00:17:18.318 10:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:18.577 [2024-07-25 10:32:22.131841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:18.577 [2024-07-25 10:32:22.131901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.577 [2024-07-25 10:32:22.131924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12014d0 00:17:18.577 [2024-07-25 10:32:22.131939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.577 [2024-07-25 10:32:22.133464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.577 [2024-07-25 10:32:22.133492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:18.577 pt3 00:17:18.577 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:18.577 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:18.577 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:17:18.577 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:17:18.577 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:18.577 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.577 
10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.577 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.577 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:18.835 malloc4 00:17:18.835 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:19.093 [2024-07-25 10:32:22.680518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:19.093 [2024-07-25 10:32:22.680587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.093 [2024-07-25 10:32:22.680612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105fe30 00:17:19.093 [2024-07-25 10:32:22.680627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.093 [2024-07-25 10:32:22.682437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.093 [2024-07-25 10:32:22.682466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:19.093 pt4 00:17:19.093 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:19.093 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:19.093 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:19.351 [2024-07-25 10:32:22.969318] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:17:19.351 [2024-07-25 10:32:22.970743] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.351 [2024-07-25 10:32:22.970810] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:19.351 [2024-07-25 10:32:22.970868] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:19.351 [2024-07-25 10:32:22.971077] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1060980 00:17:19.351 [2024-07-25 10:32:22.971095] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:19.351 [2024-07-25 10:32:22.971329] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x107f1a0 00:17:19.351 [2024-07-25 10:32:22.971521] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1060980 00:17:19.351 [2024-07-25 10:32:22.971537] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1060980 00:17:19.351 [2024-07-25 10:32:22.971668] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.351 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:19.351 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:19.351 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:19.351 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:19.351 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:19.351 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:19.351 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.351 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:17:19.351 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.351 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.352 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.352 10:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.609 10:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.609 "name": "raid_bdev1", 00:17:19.609 "uuid": "31b43814-dabd-45be-9077-894593244e55", 00:17:19.609 "strip_size_kb": 64, 00:17:19.609 "state": "online", 00:17:19.609 "raid_level": "raid0", 00:17:19.609 "superblock": true, 00:17:19.609 "num_base_bdevs": 4, 00:17:19.609 "num_base_bdevs_discovered": 4, 00:17:19.609 "num_base_bdevs_operational": 4, 00:17:19.609 "base_bdevs_list": [ 00:17:19.609 { 00:17:19.609 "name": "pt1", 00:17:19.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.609 "is_configured": true, 00:17:19.609 "data_offset": 2048, 00:17:19.609 "data_size": 63488 00:17:19.609 }, 00:17:19.609 { 00:17:19.609 "name": "pt2", 00:17:19.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.609 "is_configured": true, 00:17:19.609 "data_offset": 2048, 00:17:19.609 "data_size": 63488 00:17:19.609 }, 00:17:19.609 { 00:17:19.609 "name": "pt3", 00:17:19.609 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.609 "is_configured": true, 00:17:19.609 "data_offset": 2048, 00:17:19.609 "data_size": 63488 00:17:19.609 }, 00:17:19.609 { 00:17:19.609 "name": "pt4", 00:17:19.609 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:19.609 "is_configured": true, 00:17:19.609 "data_offset": 2048, 00:17:19.609 "data_size": 63488 00:17:19.609 } 00:17:19.609 ] 00:17:19.609 }' 00:17:19.609 10:32:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.609 10:32:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.179 10:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:20.179 10:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:20.179 10:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:20.179 10:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:20.179 10:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:20.179 10:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:20.179 10:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:20.179 10:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:20.436 [2024-07-25 10:32:24.104579] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.436 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:20.436 "name": "raid_bdev1", 00:17:20.436 "aliases": [ 00:17:20.436 "31b43814-dabd-45be-9077-894593244e55" 00:17:20.436 ], 00:17:20.436 "product_name": "Raid Volume", 00:17:20.436 "block_size": 512, 00:17:20.436 "num_blocks": 253952, 00:17:20.436 "uuid": "31b43814-dabd-45be-9077-894593244e55", 00:17:20.436 "assigned_rate_limits": { 00:17:20.436 "rw_ios_per_sec": 0, 00:17:20.436 "rw_mbytes_per_sec": 0, 00:17:20.436 "r_mbytes_per_sec": 0, 00:17:20.436 "w_mbytes_per_sec": 0 00:17:20.436 }, 00:17:20.436 "claimed": false, 00:17:20.436 "zoned": false, 00:17:20.436 "supported_io_types": { 00:17:20.436 "read": true, 00:17:20.436 "write": true, 00:17:20.436 
"unmap": true, 00:17:20.436 "flush": true, 00:17:20.436 "reset": true, 00:17:20.437 "nvme_admin": false, 00:17:20.437 "nvme_io": false, 00:17:20.437 "nvme_io_md": false, 00:17:20.437 "write_zeroes": true, 00:17:20.437 "zcopy": false, 00:17:20.437 "get_zone_info": false, 00:17:20.437 "zone_management": false, 00:17:20.437 "zone_append": false, 00:17:20.437 "compare": false, 00:17:20.437 "compare_and_write": false, 00:17:20.437 "abort": false, 00:17:20.437 "seek_hole": false, 00:17:20.437 "seek_data": false, 00:17:20.437 "copy": false, 00:17:20.437 "nvme_iov_md": false 00:17:20.437 }, 00:17:20.437 "memory_domains": [ 00:17:20.437 { 00:17:20.437 "dma_device_id": "system", 00:17:20.437 "dma_device_type": 1 00:17:20.437 }, 00:17:20.437 { 00:17:20.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.437 "dma_device_type": 2 00:17:20.437 }, 00:17:20.437 { 00:17:20.437 "dma_device_id": "system", 00:17:20.437 "dma_device_type": 1 00:17:20.437 }, 00:17:20.437 { 00:17:20.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.437 "dma_device_type": 2 00:17:20.437 }, 00:17:20.437 { 00:17:20.437 "dma_device_id": "system", 00:17:20.437 "dma_device_type": 1 00:17:20.437 }, 00:17:20.437 { 00:17:20.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.437 "dma_device_type": 2 00:17:20.437 }, 00:17:20.437 { 00:17:20.437 "dma_device_id": "system", 00:17:20.437 "dma_device_type": 1 00:17:20.437 }, 00:17:20.437 { 00:17:20.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.437 "dma_device_type": 2 00:17:20.437 } 00:17:20.437 ], 00:17:20.437 "driver_specific": { 00:17:20.437 "raid": { 00:17:20.437 "uuid": "31b43814-dabd-45be-9077-894593244e55", 00:17:20.437 "strip_size_kb": 64, 00:17:20.437 "state": "online", 00:17:20.437 "raid_level": "raid0", 00:17:20.437 "superblock": true, 00:17:20.437 "num_base_bdevs": 4, 00:17:20.437 "num_base_bdevs_discovered": 4, 00:17:20.437 "num_base_bdevs_operational": 4, 00:17:20.437 "base_bdevs_list": [ 00:17:20.437 { 00:17:20.437 "name": "pt1", 
00:17:20.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.437 "is_configured": true, 00:17:20.437 "data_offset": 2048, 00:17:20.437 "data_size": 63488 00:17:20.437 }, 00:17:20.437 { 00:17:20.437 "name": "pt2", 00:17:20.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.437 "is_configured": true, 00:17:20.437 "data_offset": 2048, 00:17:20.437 "data_size": 63488 00:17:20.437 }, 00:17:20.437 { 00:17:20.437 "name": "pt3", 00:17:20.437 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:20.437 "is_configured": true, 00:17:20.437 "data_offset": 2048, 00:17:20.437 "data_size": 63488 00:17:20.437 }, 00:17:20.437 { 00:17:20.437 "name": "pt4", 00:17:20.437 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:20.437 "is_configured": true, 00:17:20.437 "data_offset": 2048, 00:17:20.437 "data_size": 63488 00:17:20.437 } 00:17:20.437 ] 00:17:20.437 } 00:17:20.437 } 00:17:20.437 }' 00:17:20.437 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:20.694 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:20.694 pt2 00:17:20.694 pt3 00:17:20.694 pt4' 00:17:20.694 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:20.694 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:20.694 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:20.952 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:20.952 "name": "pt1", 00:17:20.952 "aliases": [ 00:17:20.952 "00000000-0000-0000-0000-000000000001" 00:17:20.952 ], 00:17:20.952 "product_name": "passthru", 00:17:20.952 "block_size": 512, 00:17:20.952 "num_blocks": 65536, 00:17:20.952 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:20.952 "assigned_rate_limits": { 00:17:20.952 "rw_ios_per_sec": 0, 00:17:20.952 "rw_mbytes_per_sec": 0, 00:17:20.952 "r_mbytes_per_sec": 0, 00:17:20.952 "w_mbytes_per_sec": 0 00:17:20.952 }, 00:17:20.952 "claimed": true, 00:17:20.952 "claim_type": "exclusive_write", 00:17:20.952 "zoned": false, 00:17:20.952 "supported_io_types": { 00:17:20.952 "read": true, 00:17:20.952 "write": true, 00:17:20.952 "unmap": true, 00:17:20.952 "flush": true, 00:17:20.952 "reset": true, 00:17:20.952 "nvme_admin": false, 00:17:20.952 "nvme_io": false, 00:17:20.952 "nvme_io_md": false, 00:17:20.952 "write_zeroes": true, 00:17:20.952 "zcopy": true, 00:17:20.952 "get_zone_info": false, 00:17:20.952 "zone_management": false, 00:17:20.952 "zone_append": false, 00:17:20.952 "compare": false, 00:17:20.952 "compare_and_write": false, 00:17:20.952 "abort": true, 00:17:20.952 "seek_hole": false, 00:17:20.952 "seek_data": false, 00:17:20.952 "copy": true, 00:17:20.952 "nvme_iov_md": false 00:17:20.952 }, 00:17:20.952 "memory_domains": [ 00:17:20.952 { 00:17:20.952 "dma_device_id": "system", 00:17:20.952 "dma_device_type": 1 00:17:20.952 }, 00:17:20.952 { 00:17:20.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.952 "dma_device_type": 2 00:17:20.952 } 00:17:20.952 ], 00:17:20.952 "driver_specific": { 00:17:20.952 "passthru": { 00:17:20.952 "name": "pt1", 00:17:20.952 "base_bdev_name": "malloc1" 00:17:20.952 } 00:17:20.952 } 00:17:20.952 }' 00:17:20.952 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.952 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.952 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:20.952 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.952 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.952 10:32:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:20.952 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.952 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.210 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:21.210 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.210 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.210 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:21.210 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:21.210 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:21.210 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:21.468 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:21.468 "name": "pt2", 00:17:21.468 "aliases": [ 00:17:21.468 "00000000-0000-0000-0000-000000000002" 00:17:21.468 ], 00:17:21.468 "product_name": "passthru", 00:17:21.468 "block_size": 512, 00:17:21.468 "num_blocks": 65536, 00:17:21.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.468 "assigned_rate_limits": { 00:17:21.468 "rw_ios_per_sec": 0, 00:17:21.468 "rw_mbytes_per_sec": 0, 00:17:21.468 "r_mbytes_per_sec": 0, 00:17:21.468 "w_mbytes_per_sec": 0 00:17:21.468 }, 00:17:21.468 "claimed": true, 00:17:21.468 "claim_type": "exclusive_write", 00:17:21.468 "zoned": false, 00:17:21.468 "supported_io_types": { 00:17:21.468 "read": true, 00:17:21.468 "write": true, 00:17:21.468 "unmap": true, 00:17:21.468 "flush": true, 00:17:21.468 "reset": true, 00:17:21.468 "nvme_admin": false, 00:17:21.468 
"nvme_io": false, 00:17:21.468 "nvme_io_md": false, 00:17:21.468 "write_zeroes": true, 00:17:21.468 "zcopy": true, 00:17:21.468 "get_zone_info": false, 00:17:21.468 "zone_management": false, 00:17:21.468 "zone_append": false, 00:17:21.468 "compare": false, 00:17:21.468 "compare_and_write": false, 00:17:21.468 "abort": true, 00:17:21.468 "seek_hole": false, 00:17:21.468 "seek_data": false, 00:17:21.468 "copy": true, 00:17:21.468 "nvme_iov_md": false 00:17:21.468 }, 00:17:21.468 "memory_domains": [ 00:17:21.468 { 00:17:21.468 "dma_device_id": "system", 00:17:21.468 "dma_device_type": 1 00:17:21.468 }, 00:17:21.468 { 00:17:21.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.468 "dma_device_type": 2 00:17:21.468 } 00:17:21.468 ], 00:17:21.468 "driver_specific": { 00:17:21.468 "passthru": { 00:17:21.468 "name": "pt2", 00:17:21.468 "base_bdev_name": "malloc2" 00:17:21.468 } 00:17:21.468 } 00:17:21.468 }' 00:17:21.468 10:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.468 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.468 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:21.468 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.468 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.468 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:21.468 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.725 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.725 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:21.725 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.725 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:17:21.725 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:21.725 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:21.725 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:21.725 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:21.983 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:21.983 "name": "pt3", 00:17:21.983 "aliases": [ 00:17:21.983 "00000000-0000-0000-0000-000000000003" 00:17:21.983 ], 00:17:21.983 "product_name": "passthru", 00:17:21.983 "block_size": 512, 00:17:21.983 "num_blocks": 65536, 00:17:21.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.983 "assigned_rate_limits": { 00:17:21.983 "rw_ios_per_sec": 0, 00:17:21.983 "rw_mbytes_per_sec": 0, 00:17:21.983 "r_mbytes_per_sec": 0, 00:17:21.983 "w_mbytes_per_sec": 0 00:17:21.983 }, 00:17:21.983 "claimed": true, 00:17:21.983 "claim_type": "exclusive_write", 00:17:21.983 "zoned": false, 00:17:21.983 "supported_io_types": { 00:17:21.983 "read": true, 00:17:21.983 "write": true, 00:17:21.983 "unmap": true, 00:17:21.983 "flush": true, 00:17:21.983 "reset": true, 00:17:21.983 "nvme_admin": false, 00:17:21.983 "nvme_io": false, 00:17:21.983 "nvme_io_md": false, 00:17:21.983 "write_zeroes": true, 00:17:21.983 "zcopy": true, 00:17:21.983 "get_zone_info": false, 00:17:21.983 "zone_management": false, 00:17:21.983 "zone_append": false, 00:17:21.983 "compare": false, 00:17:21.983 "compare_and_write": false, 00:17:21.983 "abort": true, 00:17:21.983 "seek_hole": false, 00:17:21.983 "seek_data": false, 00:17:21.983 "copy": true, 00:17:21.983 "nvme_iov_md": false 00:17:21.983 }, 00:17:21.983 "memory_domains": [ 00:17:21.983 { 00:17:21.983 "dma_device_id": "system", 00:17:21.983 
"dma_device_type": 1 00:17:21.983 }, 00:17:21.983 { 00:17:21.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.983 "dma_device_type": 2 00:17:21.983 } 00:17:21.983 ], 00:17:21.983 "driver_specific": { 00:17:21.983 "passthru": { 00:17:21.983 "name": "pt3", 00:17:21.983 "base_bdev_name": "malloc3" 00:17:21.983 } 00:17:21.983 } 00:17:21.983 }' 00:17:21.983 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.983 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.983 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:21.983 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.983 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.241 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:22.241 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.241 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.241 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:22.241 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.241 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.241 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:22.241 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:22.241 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:22.241 10:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:22.499 10:32:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:22.499 "name": "pt4", 00:17:22.499 "aliases": [ 00:17:22.499 "00000000-0000-0000-0000-000000000004" 00:17:22.499 ], 00:17:22.499 "product_name": "passthru", 00:17:22.499 "block_size": 512, 00:17:22.499 "num_blocks": 65536, 00:17:22.499 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:22.499 "assigned_rate_limits": { 00:17:22.499 "rw_ios_per_sec": 0, 00:17:22.499 "rw_mbytes_per_sec": 0, 00:17:22.499 "r_mbytes_per_sec": 0, 00:17:22.499 "w_mbytes_per_sec": 0 00:17:22.499 }, 00:17:22.499 "claimed": true, 00:17:22.499 "claim_type": "exclusive_write", 00:17:22.499 "zoned": false, 00:17:22.499 "supported_io_types": { 00:17:22.499 "read": true, 00:17:22.499 "write": true, 00:17:22.499 "unmap": true, 00:17:22.499 "flush": true, 00:17:22.499 "reset": true, 00:17:22.499 "nvme_admin": false, 00:17:22.499 "nvme_io": false, 00:17:22.499 "nvme_io_md": false, 00:17:22.499 "write_zeroes": true, 00:17:22.499 "zcopy": true, 00:17:22.499 "get_zone_info": false, 00:17:22.499 "zone_management": false, 00:17:22.499 "zone_append": false, 00:17:22.499 "compare": false, 00:17:22.499 "compare_and_write": false, 00:17:22.499 "abort": true, 00:17:22.499 "seek_hole": false, 00:17:22.499 "seek_data": false, 00:17:22.499 "copy": true, 00:17:22.499 "nvme_iov_md": false 00:17:22.499 }, 00:17:22.499 "memory_domains": [ 00:17:22.499 { 00:17:22.499 "dma_device_id": "system", 00:17:22.499 "dma_device_type": 1 00:17:22.499 }, 00:17:22.499 { 00:17:22.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.499 "dma_device_type": 2 00:17:22.499 } 00:17:22.499 ], 00:17:22.499 "driver_specific": { 00:17:22.499 "passthru": { 00:17:22.499 "name": "pt4", 00:17:22.499 "base_bdev_name": "malloc4" 00:17:22.499 } 00:17:22.499 } 00:17:22.499 }' 00:17:22.499 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.499 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.499 10:32:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:22.499 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.499 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.756 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:22.756 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.756 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.756 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:22.756 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.756 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.756 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:22.756 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:22.756 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:23.014 [2024-07-25 10:32:26.611185] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.014 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=31b43814-dabd-45be-9077-894593244e55 00:17:23.014 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 31b43814-dabd-45be-9077-894593244e55 ']' 00:17:23.014 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:23.301 [2024-07-25 10:32:26.867622] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.301 
[2024-07-25 10:32:26.867649] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.301 [2024-07-25 10:32:26.867730] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.301 [2024-07-25 10:32:26.867804] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.301 [2024-07-25 10:32:26.867817] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1060980 name raid_bdev1, state offline 00:17:23.301 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.301 10:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:23.559 10:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:23.559 10:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:23.559 10:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:23.559 10:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:23.816 10:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:23.816 10:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:24.074 10:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:24.074 10:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:24.332 10:32:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:24.332 10:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:24.590 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:24.590 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:24.848 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:24.848 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:24.848 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:24.848 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:24.848 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:24.848 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.849 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:24.849 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.849 10:32:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:24.849 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.849 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:24.849 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:17:24.849 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:25.107 [2024-07-25 10:32:28.652374] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:25.107 [2024-07-25 10:32:28.653695] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:25.107 [2024-07-25 10:32:28.653739] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:25.107 [2024-07-25 10:32:28.653778] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:25.107 [2024-07-25 10:32:28.653838] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:25.107 [2024-07-25 10:32:28.653904] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:25.107 [2024-07-25 10:32:28.653932] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:25.107 [2024-07-25 10:32:28.653958] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:25.107 
[2024-07-25 10:32:28.653977] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.107 [2024-07-25 10:32:28.653989] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12021b0 name raid_bdev1, state configuring 00:17:25.107 request: 00:17:25.107 { 00:17:25.107 "name": "raid_bdev1", 00:17:25.107 "raid_level": "raid0", 00:17:25.107 "base_bdevs": [ 00:17:25.107 "malloc1", 00:17:25.107 "malloc2", 00:17:25.107 "malloc3", 00:17:25.107 "malloc4" 00:17:25.107 ], 00:17:25.107 "strip_size_kb": 64, 00:17:25.107 "superblock": false, 00:17:25.107 "method": "bdev_raid_create", 00:17:25.107 "req_id": 1 00:17:25.107 } 00:17:25.107 Got JSON-RPC error response 00:17:25.107 response: 00:17:25.107 { 00:17:25.107 "code": -17, 00:17:25.107 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:25.107 } 00:17:25.107 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:25.107 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:25.107 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:25.107 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:25.107 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.107 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:25.365 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:25.365 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:25.365 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:17:25.624 [2024-07-25 10:32:29.149590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:25.624 [2024-07-25 10:32:29.149649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.624 [2024-07-25 10:32:29.149671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1201700 00:17:25.624 [2024-07-25 10:32:29.149683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.624 [2024-07-25 10:32:29.151259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.624 [2024-07-25 10:32:29.151282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:25.624 [2024-07-25 10:32:29.151369] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:25.624 [2024-07-25 10:32:29.151401] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:25.624 pt1 00:17:25.624 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:25.624 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:25.624 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:25.624 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:25.624 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:25.624 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:25.624 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.624 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.624 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:25.624 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.624 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.624 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.882 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.882 "name": "raid_bdev1", 00:17:25.882 "uuid": "31b43814-dabd-45be-9077-894593244e55", 00:17:25.882 "strip_size_kb": 64, 00:17:25.882 "state": "configuring", 00:17:25.882 "raid_level": "raid0", 00:17:25.882 "superblock": true, 00:17:25.882 "num_base_bdevs": 4, 00:17:25.882 "num_base_bdevs_discovered": 1, 00:17:25.882 "num_base_bdevs_operational": 4, 00:17:25.882 "base_bdevs_list": [ 00:17:25.882 { 00:17:25.882 "name": "pt1", 00:17:25.882 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.882 "is_configured": true, 00:17:25.882 "data_offset": 2048, 00:17:25.882 "data_size": 63488 00:17:25.882 }, 00:17:25.882 { 00:17:25.883 "name": null, 00:17:25.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.883 "is_configured": false, 00:17:25.883 "data_offset": 2048, 00:17:25.883 "data_size": 63488 00:17:25.883 }, 00:17:25.883 { 00:17:25.883 "name": null, 00:17:25.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:25.883 "is_configured": false, 00:17:25.883 "data_offset": 2048, 00:17:25.883 "data_size": 63488 00:17:25.883 }, 00:17:25.883 { 00:17:25.883 "name": null, 00:17:25.883 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:25.883 "is_configured": false, 00:17:25.883 "data_offset": 2048, 00:17:25.883 "data_size": 63488 00:17:25.883 } 00:17:25.883 ] 00:17:25.883 }' 00:17:25.883 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.883 10:32:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.448 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:17:26.448 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:26.706 [2024-07-25 10:32:30.224484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:26.706 [2024-07-25 10:32:30.224548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.706 [2024-07-25 10:32:30.224573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x120b4b0 00:17:26.706 [2024-07-25 10:32:30.224587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.706 [2024-07-25 10:32:30.225039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.706 [2024-07-25 10:32:30.225065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:26.706 [2024-07-25 10:32:30.225168] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:26.706 [2024-07-25 10:32:30.225197] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:26.706 pt2 00:17:26.706 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:26.965 [2024-07-25 10:32:30.501212] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:26.965 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:26.965 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:26.965 10:32:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:26.965 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:26.965 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:26.965 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:26.965 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.965 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.965 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.965 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.965 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.965 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.223 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:27.223 "name": "raid_bdev1", 00:17:27.223 "uuid": "31b43814-dabd-45be-9077-894593244e55", 00:17:27.223 "strip_size_kb": 64, 00:17:27.223 "state": "configuring", 00:17:27.223 "raid_level": "raid0", 00:17:27.223 "superblock": true, 00:17:27.223 "num_base_bdevs": 4, 00:17:27.223 "num_base_bdevs_discovered": 1, 00:17:27.223 "num_base_bdevs_operational": 4, 00:17:27.223 "base_bdevs_list": [ 00:17:27.223 { 00:17:27.223 "name": "pt1", 00:17:27.223 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:27.223 "is_configured": true, 00:17:27.223 "data_offset": 2048, 00:17:27.223 "data_size": 63488 00:17:27.223 }, 00:17:27.223 { 00:17:27.223 "name": null, 00:17:27.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:27.223 
"is_configured": false, 00:17:27.223 "data_offset": 2048, 00:17:27.223 "data_size": 63488 00:17:27.223 }, 00:17:27.223 { 00:17:27.223 "name": null, 00:17:27.223 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:27.223 "is_configured": false, 00:17:27.223 "data_offset": 2048, 00:17:27.223 "data_size": 63488 00:17:27.223 }, 00:17:27.223 { 00:17:27.223 "name": null, 00:17:27.223 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:27.223 "is_configured": false, 00:17:27.223 "data_offset": 2048, 00:17:27.223 "data_size": 63488 00:17:27.223 } 00:17:27.223 ] 00:17:27.223 }' 00:17:27.223 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:27.223 10:32:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.787 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:27.787 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:27.787 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:28.045 [2024-07-25 10:32:31.592169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:28.045 [2024-07-25 10:32:31.592244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.045 [2024-07-25 10:32:31.592269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10615c0 00:17:28.045 [2024-07-25 10:32:31.592284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.045 [2024-07-25 10:32:31.592701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.045 [2024-07-25 10:32:31.592727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:28.045 [2024-07-25 10:32:31.592814] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:28.045 [2024-07-25 10:32:31.592843] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:28.045 pt2 00:17:28.045 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:28.045 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:28.045 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:28.302 [2024-07-25 10:32:31.852832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:28.302 [2024-07-25 10:32:31.852877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.302 [2024-07-25 10:32:31.852893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1069950 00:17:28.302 [2024-07-25 10:32:31.852904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.302 [2024-07-25 10:32:31.853177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.302 [2024-07-25 10:32:31.853199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:28.302 [2024-07-25 10:32:31.853246] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:28.302 [2024-07-25 10:32:31.853266] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:28.302 pt3 00:17:28.302 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:28.302 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:28.302 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:28.560 [2024-07-25 10:32:32.093459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:28.560 [2024-07-25 10:32:32.093487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.560 [2024-07-25 10:32:32.093518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1060300 00:17:28.560 [2024-07-25 10:32:32.093529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.560 [2024-07-25 10:32:32.093742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.560 [2024-07-25 10:32:32.093763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:28.560 [2024-07-25 10:32:32.093806] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:28.560 [2024-07-25 10:32:32.093827] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:28.560 [2024-07-25 10:32:32.093929] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1061860 00:17:28.560 [2024-07-25 10:32:32.093942] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:28.560 [2024-07-25 10:32:32.094093] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1067900 00:17:28.560 [2024-07-25 10:32:32.094241] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1061860 00:17:28.560 [2024-07-25 10:32:32.094260] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1061860 00:17:28.560 [2024-07-25 10:32:32.094347] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.560 pt4 00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 
00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.560 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.819 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.819 "name": "raid_bdev1", 00:17:28.819 "uuid": "31b43814-dabd-45be-9077-894593244e55", 00:17:28.819 "strip_size_kb": 64, 00:17:28.819 "state": "online", 00:17:28.819 "raid_level": "raid0", 00:17:28.819 "superblock": true, 00:17:28.819 "num_base_bdevs": 4, 00:17:28.819 "num_base_bdevs_discovered": 4, 00:17:28.819 "num_base_bdevs_operational": 4, 
00:17:28.819 "base_bdevs_list": [ 00:17:28.819 { 00:17:28.819 "name": "pt1", 00:17:28.819 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:28.819 "is_configured": true, 00:17:28.819 "data_offset": 2048, 00:17:28.819 "data_size": 63488 00:17:28.819 }, 00:17:28.819 { 00:17:28.819 "name": "pt2", 00:17:28.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:28.819 "is_configured": true, 00:17:28.819 "data_offset": 2048, 00:17:28.819 "data_size": 63488 00:17:28.819 }, 00:17:28.819 { 00:17:28.819 "name": "pt3", 00:17:28.819 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:28.819 "is_configured": true, 00:17:28.819 "data_offset": 2048, 00:17:28.819 "data_size": 63488 00:17:28.819 }, 00:17:28.819 { 00:17:28.819 "name": "pt4", 00:17:28.819 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:28.819 "is_configured": true, 00:17:28.819 "data_offset": 2048, 00:17:28.819 "data_size": 63488 00:17:28.819 } 00:17:28.819 ] 00:17:28.819 }' 00:17:28.819 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.819 10:32:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.385 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:29.385 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:29.385 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:29.385 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:29.385 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:29.385 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:29.385 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 
00:17:29.385 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:29.643 [2024-07-25 10:32:33.108413] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.643 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:29.643 "name": "raid_bdev1", 00:17:29.643 "aliases": [ 00:17:29.643 "31b43814-dabd-45be-9077-894593244e55" 00:17:29.643 ], 00:17:29.643 "product_name": "Raid Volume", 00:17:29.643 "block_size": 512, 00:17:29.643 "num_blocks": 253952, 00:17:29.643 "uuid": "31b43814-dabd-45be-9077-894593244e55", 00:17:29.643 "assigned_rate_limits": { 00:17:29.643 "rw_ios_per_sec": 0, 00:17:29.643 "rw_mbytes_per_sec": 0, 00:17:29.643 "r_mbytes_per_sec": 0, 00:17:29.643 "w_mbytes_per_sec": 0 00:17:29.643 }, 00:17:29.643 "claimed": false, 00:17:29.643 "zoned": false, 00:17:29.643 "supported_io_types": { 00:17:29.643 "read": true, 00:17:29.643 "write": true, 00:17:29.643 "unmap": true, 00:17:29.643 "flush": true, 00:17:29.643 "reset": true, 00:17:29.643 "nvme_admin": false, 00:17:29.643 "nvme_io": false, 00:17:29.643 "nvme_io_md": false, 00:17:29.644 "write_zeroes": true, 00:17:29.644 "zcopy": false, 00:17:29.644 "get_zone_info": false, 00:17:29.644 "zone_management": false, 00:17:29.644 "zone_append": false, 00:17:29.644 "compare": false, 00:17:29.644 "compare_and_write": false, 00:17:29.644 "abort": false, 00:17:29.644 "seek_hole": false, 00:17:29.644 "seek_data": false, 00:17:29.644 "copy": false, 00:17:29.644 "nvme_iov_md": false 00:17:29.644 }, 00:17:29.644 "memory_domains": [ 00:17:29.644 { 00:17:29.644 "dma_device_id": "system", 00:17:29.644 "dma_device_type": 1 00:17:29.644 }, 00:17:29.644 { 00:17:29.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.644 "dma_device_type": 2 00:17:29.644 }, 00:17:29.644 { 00:17:29.644 "dma_device_id": "system", 00:17:29.644 "dma_device_type": 1 00:17:29.644 }, 00:17:29.644 { 00:17:29.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:17:29.644 "dma_device_type": 2 00:17:29.644 }, 00:17:29.644 { 00:17:29.644 "dma_device_id": "system", 00:17:29.644 "dma_device_type": 1 00:17:29.644 }, 00:17:29.644 { 00:17:29.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.644 "dma_device_type": 2 00:17:29.644 }, 00:17:29.644 { 00:17:29.644 "dma_device_id": "system", 00:17:29.644 "dma_device_type": 1 00:17:29.644 }, 00:17:29.644 { 00:17:29.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.644 "dma_device_type": 2 00:17:29.644 } 00:17:29.644 ], 00:17:29.644 "driver_specific": { 00:17:29.644 "raid": { 00:17:29.644 "uuid": "31b43814-dabd-45be-9077-894593244e55", 00:17:29.644 "strip_size_kb": 64, 00:17:29.644 "state": "online", 00:17:29.644 "raid_level": "raid0", 00:17:29.644 "superblock": true, 00:17:29.644 "num_base_bdevs": 4, 00:17:29.644 "num_base_bdevs_discovered": 4, 00:17:29.644 "num_base_bdevs_operational": 4, 00:17:29.644 "base_bdevs_list": [ 00:17:29.644 { 00:17:29.644 "name": "pt1", 00:17:29.644 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.644 "is_configured": true, 00:17:29.644 "data_offset": 2048, 00:17:29.644 "data_size": 63488 00:17:29.644 }, 00:17:29.644 { 00:17:29.644 "name": "pt2", 00:17:29.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.644 "is_configured": true, 00:17:29.644 "data_offset": 2048, 00:17:29.644 "data_size": 63488 00:17:29.644 }, 00:17:29.644 { 00:17:29.644 "name": "pt3", 00:17:29.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:29.644 "is_configured": true, 00:17:29.644 "data_offset": 2048, 00:17:29.644 "data_size": 63488 00:17:29.644 }, 00:17:29.644 { 00:17:29.644 "name": "pt4", 00:17:29.644 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:29.644 "is_configured": true, 00:17:29.644 "data_offset": 2048, 00:17:29.644 "data_size": 63488 00:17:29.644 } 00:17:29.644 ] 00:17:29.644 } 00:17:29.644 } 00:17:29.644 }' 00:17:29.644 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.644 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:29.644 pt2 00:17:29.644 pt3 00:17:29.644 pt4' 00:17:29.644 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:29.644 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:29.644 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:29.901 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:29.901 "name": "pt1", 00:17:29.901 "aliases": [ 00:17:29.901 "00000000-0000-0000-0000-000000000001" 00:17:29.901 ], 00:17:29.901 "product_name": "passthru", 00:17:29.901 "block_size": 512, 00:17:29.901 "num_blocks": 65536, 00:17:29.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.901 "assigned_rate_limits": { 00:17:29.901 "rw_ios_per_sec": 0, 00:17:29.901 "rw_mbytes_per_sec": 0, 00:17:29.901 "r_mbytes_per_sec": 0, 00:17:29.901 "w_mbytes_per_sec": 0 00:17:29.901 }, 00:17:29.901 "claimed": true, 00:17:29.901 "claim_type": "exclusive_write", 00:17:29.901 "zoned": false, 00:17:29.901 "supported_io_types": { 00:17:29.901 "read": true, 00:17:29.901 "write": true, 00:17:29.901 "unmap": true, 00:17:29.901 "flush": true, 00:17:29.901 "reset": true, 00:17:29.901 "nvme_admin": false, 00:17:29.901 "nvme_io": false, 00:17:29.901 "nvme_io_md": false, 00:17:29.901 "write_zeroes": true, 00:17:29.901 "zcopy": true, 00:17:29.901 "get_zone_info": false, 00:17:29.901 "zone_management": false, 00:17:29.901 "zone_append": false, 00:17:29.901 "compare": false, 00:17:29.901 "compare_and_write": false, 00:17:29.901 "abort": true, 00:17:29.901 "seek_hole": false, 00:17:29.901 "seek_data": false, 00:17:29.901 "copy": true, 00:17:29.901 "nvme_iov_md": 
false 00:17:29.901 }, 00:17:29.901 "memory_domains": [ 00:17:29.901 { 00:17:29.901 "dma_device_id": "system", 00:17:29.901 "dma_device_type": 1 00:17:29.901 }, 00:17:29.901 { 00:17:29.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.901 "dma_device_type": 2 00:17:29.901 } 00:17:29.901 ], 00:17:29.901 "driver_specific": { 00:17:29.901 "passthru": { 00:17:29.901 "name": "pt1", 00:17:29.901 "base_bdev_name": "malloc1" 00:17:29.901 } 00:17:29.901 } 00:17:29.901 }' 00:17:29.901 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.901 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.901 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:29.901 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.901 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.901 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:29.901 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.901 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.901 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:29.901 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:30.158 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:30.158 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:30.158 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:30.158 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:30.158 10:32:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:30.415 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:30.415 "name": "pt2", 00:17:30.415 "aliases": [ 00:17:30.415 "00000000-0000-0000-0000-000000000002" 00:17:30.415 ], 00:17:30.415 "product_name": "passthru", 00:17:30.415 "block_size": 512, 00:17:30.415 "num_blocks": 65536, 00:17:30.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.415 "assigned_rate_limits": { 00:17:30.415 "rw_ios_per_sec": 0, 00:17:30.415 "rw_mbytes_per_sec": 0, 00:17:30.415 "r_mbytes_per_sec": 0, 00:17:30.415 "w_mbytes_per_sec": 0 00:17:30.415 }, 00:17:30.415 "claimed": true, 00:17:30.415 "claim_type": "exclusive_write", 00:17:30.415 "zoned": false, 00:17:30.415 "supported_io_types": { 00:17:30.415 "read": true, 00:17:30.415 "write": true, 00:17:30.415 "unmap": true, 00:17:30.415 "flush": true, 00:17:30.415 "reset": true, 00:17:30.415 "nvme_admin": false, 00:17:30.415 "nvme_io": false, 00:17:30.415 "nvme_io_md": false, 00:17:30.415 "write_zeroes": true, 00:17:30.415 "zcopy": true, 00:17:30.415 "get_zone_info": false, 00:17:30.415 "zone_management": false, 00:17:30.415 "zone_append": false, 00:17:30.415 "compare": false, 00:17:30.415 "compare_and_write": false, 00:17:30.415 "abort": true, 00:17:30.415 "seek_hole": false, 00:17:30.415 "seek_data": false, 00:17:30.415 "copy": true, 00:17:30.415 "nvme_iov_md": false 00:17:30.415 }, 00:17:30.415 "memory_domains": [ 00:17:30.415 { 00:17:30.415 "dma_device_id": "system", 00:17:30.415 "dma_device_type": 1 00:17:30.415 }, 00:17:30.415 { 00:17:30.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.415 "dma_device_type": 2 00:17:30.415 } 00:17:30.415 ], 00:17:30.415 "driver_specific": { 00:17:30.415 "passthru": { 00:17:30.415 "name": "pt2", 00:17:30.415 "base_bdev_name": "malloc2" 00:17:30.415 } 00:17:30.415 } 00:17:30.415 }' 00:17:30.415 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:17:30.415 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:30.415 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:30.415 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:30.415 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:30.415 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:30.415 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:30.415 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:30.673 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:30.673 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:30.673 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:30.673 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:30.674 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:30.674 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:30.674 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:30.932 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:30.932 "name": "pt3", 00:17:30.932 "aliases": [ 00:17:30.932 "00000000-0000-0000-0000-000000000003" 00:17:30.932 ], 00:17:30.932 "product_name": "passthru", 00:17:30.932 "block_size": 512, 00:17:30.932 "num_blocks": 65536, 00:17:30.932 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:30.932 "assigned_rate_limits": { 00:17:30.932 "rw_ios_per_sec": 0, 00:17:30.932 "rw_mbytes_per_sec": 0, 
00:17:30.932 "r_mbytes_per_sec": 0, 00:17:30.932 "w_mbytes_per_sec": 0 00:17:30.932 }, 00:17:30.932 "claimed": true, 00:17:30.932 "claim_type": "exclusive_write", 00:17:30.932 "zoned": false, 00:17:30.932 "supported_io_types": { 00:17:30.932 "read": true, 00:17:30.932 "write": true, 00:17:30.932 "unmap": true, 00:17:30.932 "flush": true, 00:17:30.932 "reset": true, 00:17:30.932 "nvme_admin": false, 00:17:30.932 "nvme_io": false, 00:17:30.932 "nvme_io_md": false, 00:17:30.932 "write_zeroes": true, 00:17:30.932 "zcopy": true, 00:17:30.932 "get_zone_info": false, 00:17:30.932 "zone_management": false, 00:17:30.932 "zone_append": false, 00:17:30.932 "compare": false, 00:17:30.932 "compare_and_write": false, 00:17:30.932 "abort": true, 00:17:30.932 "seek_hole": false, 00:17:30.932 "seek_data": false, 00:17:30.932 "copy": true, 00:17:30.932 "nvme_iov_md": false 00:17:30.932 }, 00:17:30.932 "memory_domains": [ 00:17:30.932 { 00:17:30.932 "dma_device_id": "system", 00:17:30.932 "dma_device_type": 1 00:17:30.932 }, 00:17:30.932 { 00:17:30.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.932 "dma_device_type": 2 00:17:30.932 } 00:17:30.932 ], 00:17:30.932 "driver_specific": { 00:17:30.932 "passthru": { 00:17:30.932 "name": "pt3", 00:17:30.932 "base_bdev_name": "malloc3" 00:17:30.932 } 00:17:30.932 } 00:17:30.932 }' 00:17:30.932 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:30.932 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:30.932 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:30.932 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:30.932 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:30.932 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:30.932 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 
-- # jq .md_interleave 00:17:30.932 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.190 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:31.190 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.190 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.190 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:31.190 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:31.190 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:31.190 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:31.448 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:31.448 "name": "pt4", 00:17:31.448 "aliases": [ 00:17:31.448 "00000000-0000-0000-0000-000000000004" 00:17:31.448 ], 00:17:31.448 "product_name": "passthru", 00:17:31.448 "block_size": 512, 00:17:31.448 "num_blocks": 65536, 00:17:31.448 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:31.448 "assigned_rate_limits": { 00:17:31.448 "rw_ios_per_sec": 0, 00:17:31.448 "rw_mbytes_per_sec": 0, 00:17:31.448 "r_mbytes_per_sec": 0, 00:17:31.448 "w_mbytes_per_sec": 0 00:17:31.448 }, 00:17:31.448 "claimed": true, 00:17:31.448 "claim_type": "exclusive_write", 00:17:31.448 "zoned": false, 00:17:31.448 "supported_io_types": { 00:17:31.448 "read": true, 00:17:31.448 "write": true, 00:17:31.448 "unmap": true, 00:17:31.448 "flush": true, 00:17:31.448 "reset": true, 00:17:31.448 "nvme_admin": false, 00:17:31.448 "nvme_io": false, 00:17:31.448 "nvme_io_md": false, 00:17:31.448 "write_zeroes": true, 00:17:31.448 "zcopy": true, 00:17:31.448 "get_zone_info": false, 00:17:31.448 
"zone_management": false, 00:17:31.448 "zone_append": false, 00:17:31.448 "compare": false, 00:17:31.448 "compare_and_write": false, 00:17:31.448 "abort": true, 00:17:31.448 "seek_hole": false, 00:17:31.448 "seek_data": false, 00:17:31.448 "copy": true, 00:17:31.448 "nvme_iov_md": false 00:17:31.448 }, 00:17:31.448 "memory_domains": [ 00:17:31.448 { 00:17:31.448 "dma_device_id": "system", 00:17:31.448 "dma_device_type": 1 00:17:31.448 }, 00:17:31.448 { 00:17:31.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.448 "dma_device_type": 2 00:17:31.448 } 00:17:31.448 ], 00:17:31.448 "driver_specific": { 00:17:31.448 "passthru": { 00:17:31.448 "name": "pt4", 00:17:31.448 "base_bdev_name": "malloc4" 00:17:31.448 } 00:17:31.448 } 00:17:31.448 }' 00:17:31.448 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.448 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.448 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:31.448 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.448 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.448 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:31.448 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.706 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.706 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:31.706 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.706 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.706 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:31.706 10:32:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:31.706 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:31.964 [2024-07-25 10:32:35.482782] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 31b43814-dabd-45be-9077-894593244e55 '!=' 31b43814-dabd-45be-9077-894593244e55 ']' 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2397394 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2397394 ']' 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2397394 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2397394 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2397394' 00:17:31.964 killing process with pid 2397394 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2397394 
00:17:31.964 [2024-07-25 10:32:35.531769] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.964 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2397394 00:17:31.964 [2024-07-25 10:32:35.531860] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.964 [2024-07-25 10:32:35.531951] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.964 [2024-07-25 10:32:35.531968] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1061860 name raid_bdev1, state offline 00:17:31.964 [2024-07-25 10:32:35.581036] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.223 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:32.223 00:17:32.223 real 0m16.432s 00:17:32.223 user 0m30.195s 00:17:32.223 sys 0m2.246s 00:17:32.223 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.223 10:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.223 ************************************ 00:17:32.223 END TEST raid_superblock_test 00:17:32.223 ************************************ 00:17:32.223 10:32:35 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:17:32.223 10:32:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:32.223 10:32:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.223 10:32:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.223 ************************************ 00:17:32.223 START TEST raid_read_error_test 00:17:32.223 ************************************ 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local 
raid_level=raid0 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 
00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ULPvviaTWB 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2399637 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2399637 /var/tmp/spdk-raid.sock 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2399637 ']' 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:17:32.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.223 10:32:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.482 [2024-07-25 10:32:35.956069] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:17:32.482 [2024-07-25 10:32:35.956157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399637 ] 00:17:32.482 [2024-07-25 10:32:36.031759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.482 [2024-07-25 10:32:36.140675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.740 [2024-07-25 10:32:36.214842] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.740 [2024-07-25 10:32:36.214874] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.354 10:32:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.354 10:32:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:33.354 10:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:33.354 10:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:33.612 BaseBdev1_malloc 00:17:33.612 10:32:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:33.870 true 00:17:33.870 10:32:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:34.127 [2024-07-25 10:32:37.723594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:34.127 [2024-07-25 10:32:37.723644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.127 [2024-07-25 10:32:37.723667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1af3250 00:17:34.127 [2024-07-25 10:32:37.723682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.127 [2024-07-25 10:32:37.725269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.127 [2024-07-25 10:32:37.725297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:34.127 BaseBdev1 00:17:34.127 10:32:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:34.128 10:32:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:34.385 BaseBdev2_malloc 00:17:34.386 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:34.643 true 00:17:34.643 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:34.902 [2024-07-25 10:32:38.517889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:34.902 [2024-07-25 10:32:38.517948] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.902 [2024-07-25 10:32:38.517976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ae2650 00:17:34.902 [2024-07-25 10:32:38.517992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.902 [2024-07-25 10:32:38.519794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.902 [2024-07-25 10:32:38.519828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:34.902 BaseBdev2 00:17:34.902 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:34.902 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:35.159 BaseBdev3_malloc 00:17:35.159 10:32:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:35.417 true 00:17:35.417 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:35.675 [2024-07-25 10:32:39.255060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:35.675 [2024-07-25 10:32:39.255115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.675 [2024-07-25 10:32:39.255140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ad85d0 00:17:35.675 [2024-07-25 10:32:39.255155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.675 [2024-07-25 10:32:39.256588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:17:35.675 [2024-07-25 10:32:39.256616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:35.675 BaseBdev3 00:17:35.675 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:35.675 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:35.934 BaseBdev4_malloc 00:17:35.934 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:36.192 true 00:17:36.192 10:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:36.450 [2024-07-25 10:32:40.000302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:36.450 [2024-07-25 10:32:40.000354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.450 [2024-07-25 10:32:40.000378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1938d10 00:17:36.450 [2024-07-25 10:32:40.000394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.450 [2024-07-25 10:32:40.001827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.450 [2024-07-25 10:32:40.001855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:36.450 BaseBdev4 00:17:36.450 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 
00:17:36.708 [2024-07-25 10:32:40.257071] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.708 [2024-07-25 10:32:40.258490] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.708 [2024-07-25 10:32:40.258557] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:36.708 [2024-07-25 10:32:40.258622] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:36.708 [2024-07-25 10:32:40.258865] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x193a8c0 00:17:36.708 [2024-07-25 10:32:40.258879] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:36.708 [2024-07-25 10:32:40.259095] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x193b240 00:17:36.708 [2024-07-25 10:32:40.259294] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x193a8c0 00:17:36.708 [2024-07-25 10:32:40.259314] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x193a8c0 00:17:36.708 [2024-07-25 10:32:40.259463] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.708 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:36.708 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:36.708 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:36.708 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:36.708 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:36.708 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:36.708 10:32:40 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.708 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.708 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.708 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.708 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.708 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.967 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.967 "name": "raid_bdev1", 00:17:36.967 "uuid": "d527a050-2615-44ad-aff5-52caa0c9ec89", 00:17:36.967 "strip_size_kb": 64, 00:17:36.967 "state": "online", 00:17:36.967 "raid_level": "raid0", 00:17:36.967 "superblock": true, 00:17:36.967 "num_base_bdevs": 4, 00:17:36.967 "num_base_bdevs_discovered": 4, 00:17:36.967 "num_base_bdevs_operational": 4, 00:17:36.967 "base_bdevs_list": [ 00:17:36.967 { 00:17:36.967 "name": "BaseBdev1", 00:17:36.967 "uuid": "08dd3857-05e0-5179-9f6b-320bc122aff7", 00:17:36.967 "is_configured": true, 00:17:36.967 "data_offset": 2048, 00:17:36.967 "data_size": 63488 00:17:36.967 }, 00:17:36.967 { 00:17:36.967 "name": "BaseBdev2", 00:17:36.967 "uuid": "99e9ac35-8023-516b-9dba-37e2f140a057", 00:17:36.967 "is_configured": true, 00:17:36.967 "data_offset": 2048, 00:17:36.967 "data_size": 63488 00:17:36.967 }, 00:17:36.967 { 00:17:36.967 "name": "BaseBdev3", 00:17:36.967 "uuid": "786f6491-c950-5bcd-a06d-65ea53c0504c", 00:17:36.967 "is_configured": true, 00:17:36.967 "data_offset": 2048, 00:17:36.967 "data_size": 63488 00:17:36.967 }, 00:17:36.967 { 00:17:36.967 "name": "BaseBdev4", 00:17:36.967 "uuid": "694c1df8-e3fe-5f18-99bc-c7225539faed", 00:17:36.967 
"is_configured": true, 00:17:36.967 "data_offset": 2048, 00:17:36.967 "data_size": 63488 00:17:36.967 } 00:17:36.967 ] 00:17:36.967 }' 00:17:36.967 10:32:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.967 10:32:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.534 10:32:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:37.534 10:32:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:37.534 [2024-07-25 10:32:41.159820] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x193db00 00:17:38.469 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.727 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.985 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.985 "name": "raid_bdev1", 00:17:38.985 "uuid": "d527a050-2615-44ad-aff5-52caa0c9ec89", 00:17:38.985 "strip_size_kb": 64, 00:17:38.985 "state": "online", 00:17:38.985 "raid_level": "raid0", 00:17:38.985 "superblock": true, 00:17:38.985 "num_base_bdevs": 4, 00:17:38.985 "num_base_bdevs_discovered": 4, 00:17:38.985 "num_base_bdevs_operational": 4, 00:17:38.985 "base_bdevs_list": [ 00:17:38.985 { 00:17:38.985 "name": "BaseBdev1", 00:17:38.985 "uuid": "08dd3857-05e0-5179-9f6b-320bc122aff7", 00:17:38.985 "is_configured": true, 00:17:38.985 "data_offset": 2048, 00:17:38.985 "data_size": 63488 00:17:38.985 }, 00:17:38.985 { 00:17:38.985 "name": "BaseBdev2", 00:17:38.985 "uuid": "99e9ac35-8023-516b-9dba-37e2f140a057", 00:17:38.985 "is_configured": true, 00:17:38.985 "data_offset": 2048, 00:17:38.985 "data_size": 63488 00:17:38.985 }, 00:17:38.985 { 00:17:38.985 "name": "BaseBdev3", 00:17:38.985 "uuid": "786f6491-c950-5bcd-a06d-65ea53c0504c", 00:17:38.985 "is_configured": true, 00:17:38.985 "data_offset": 2048, 00:17:38.985 "data_size": 63488 00:17:38.985 }, 00:17:38.985 { 00:17:38.985 "name": "BaseBdev4", 00:17:38.985 "uuid": 
"694c1df8-e3fe-5f18-99bc-c7225539faed", 00:17:38.985 "is_configured": true, 00:17:38.985 "data_offset": 2048, 00:17:38.985 "data_size": 63488 00:17:38.985 } 00:17:38.985 ] 00:17:38.985 }' 00:17:38.985 10:32:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.985 10:32:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.552 10:32:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:39.888 [2024-07-25 10:32:43.395954] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.888 [2024-07-25 10:32:43.396000] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.888 [2024-07-25 10:32:43.398495] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.888 [2024-07-25 10:32:43.398537] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.888 [2024-07-25 10:32:43.398573] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.888 [2024-07-25 10:32:43.398584] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x193a8c0 name raid_bdev1, state offline 00:17:39.888 0 00:17:39.888 10:32:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2399637 00:17:39.888 10:32:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2399637 ']' 00:17:39.888 10:32:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2399637 00:17:39.888 10:32:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:17:39.888 10:32:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:39.888 10:32:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 2399637 00:17:39.888 10:32:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:39.888 10:32:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:39.888 10:32:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2399637' 00:17:39.888 killing process with pid 2399637 00:17:39.888 10:32:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2399637 00:17:39.888 [2024-07-25 10:32:43.440038] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.888 10:32:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2399637 00:17:39.888 [2024-07-25 10:32:43.483762] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.146 10:32:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ULPvviaTWB 00:17:40.147 10:32:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:40.147 10:32:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:40.147 10:32:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:17:40.147 10:32:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:40.147 10:32:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:40.147 10:32:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:40.147 10:32:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:17:40.147 00:17:40.147 real 0m7.894s 00:17:40.147 user 0m12.764s 00:17:40.147 sys 0m1.159s 00:17:40.147 10:32:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:40.147 10:32:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.147 
************************************ 00:17:40.147 END TEST raid_read_error_test 00:17:40.147 ************************************ 00:17:40.147 10:32:43 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:17:40.147 10:32:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:40.147 10:32:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:40.147 10:32:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:40.147 ************************************ 00:17:40.147 START TEST raid_write_error_test 00:17:40.147 ************************************ 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # 
echo BaseBdev3 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.wyIdNcuKfM 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2400671 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2400671 /var/tmp/spdk-raid.sock 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2400671 ']' 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:40.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.147 10:32:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.405 [2024-07-25 10:32:43.902364] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:17:40.405 [2024-07-25 10:32:43.902436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400671 ] 00:17:40.405 [2024-07-25 10:32:43.988928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.405 [2024-07-25 10:32:44.112041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.663 [2024-07-25 10:32:44.189969] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.663 [2024-07-25 10:32:44.190006] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.229 10:32:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.229 10:32:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:41.229 10:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:41.229 10:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:41.487 BaseBdev1_malloc 00:17:41.487 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:41.745 true 00:17:42.004 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:42.004 [2024-07-25 10:32:45.688716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:42.004 [2024-07-25 10:32:45.688788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:42.004 [2024-07-25 10:32:45.688813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2040250 00:17:42.004 [2024-07-25 10:32:45.688826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.004 [2024-07-25 10:32:45.690485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.004 [2024-07-25 10:32:45.690508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:42.004 BaseBdev1 00:17:42.004 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:42.004 10:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:42.570 BaseBdev2_malloc 00:17:42.570 10:32:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:42.828 true 00:17:42.828 10:32:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:43.086 [2024-07-25 10:32:46.574851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:43.086 [2024-07-25 10:32:46.574901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.086 [2024-07-25 10:32:46.574924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x202f650 00:17:43.086 [2024-07-25 10:32:46.574940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.086 [2024-07-25 10:32:46.576376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.086 [2024-07-25 10:32:46.576410] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:43.086 BaseBdev2 00:17:43.086 10:32:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:43.086 10:32:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:43.345 BaseBdev3_malloc 00:17:43.345 10:32:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:43.602 true 00:17:43.602 10:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:43.860 [2024-07-25 10:32:47.384747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:43.860 [2024-07-25 10:32:47.384795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.860 [2024-07-25 10:32:47.384818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20255d0 00:17:43.860 [2024-07-25 10:32:47.384833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.860 [2024-07-25 10:32:47.386278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.860 [2024-07-25 10:32:47.386306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:43.860 BaseBdev3 00:17:43.860 10:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:43.860 10:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:44.118 BaseBdev4_malloc 00:17:44.118 10:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:44.375 true 00:17:44.375 10:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:44.632 [2024-07-25 10:32:48.170345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:44.632 [2024-07-25 10:32:48.170395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.632 [2024-07-25 10:32:48.170418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e85d10 00:17:44.632 [2024-07-25 10:32:48.170434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.632 [2024-07-25 10:32:48.171828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.632 [2024-07-25 10:32:48.171865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:44.632 BaseBdev4 00:17:44.632 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:44.889 [2024-07-25 10:32:48.411027] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.889 [2024-07-25 10:32:48.412237] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.889 [2024-07-25 10:32:48.412316] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:44.889 [2024-07-25 10:32:48.412393] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:44.889 [2024-07-25 10:32:48.412643] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e878c0 00:17:44.889 [2024-07-25 10:32:48.412661] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:44.889 [2024-07-25 10:32:48.412841] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e88240 00:17:44.889 [2024-07-25 10:32:48.413019] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e878c0 00:17:44.889 [2024-07-25 10:32:48.413036] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e878c0 00:17:44.889 [2024-07-25 10:32:48.413157] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.889 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:44.889 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:44.889 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:44.889 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:44.889 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:44.889 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:44.889 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.889 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.889 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.889 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.889 10:32:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.889 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.147 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:45.147 "name": "raid_bdev1", 00:17:45.147 "uuid": "469f0ff2-62f3-4ff5-90ab-d9d7e6dd4169", 00:17:45.147 "strip_size_kb": 64, 00:17:45.147 "state": "online", 00:17:45.147 "raid_level": "raid0", 00:17:45.147 "superblock": true, 00:17:45.147 "num_base_bdevs": 4, 00:17:45.147 "num_base_bdevs_discovered": 4, 00:17:45.147 "num_base_bdevs_operational": 4, 00:17:45.147 "base_bdevs_list": [ 00:17:45.147 { 00:17:45.147 "name": "BaseBdev1", 00:17:45.147 "uuid": "f5c0d3a1-8d65-579d-a1ea-ebdaae12b7d6", 00:17:45.147 "is_configured": true, 00:17:45.147 "data_offset": 2048, 00:17:45.147 "data_size": 63488 00:17:45.147 }, 00:17:45.147 { 00:17:45.147 "name": "BaseBdev2", 00:17:45.147 "uuid": "584e66b3-63c0-578b-9bf7-f0e75af831f0", 00:17:45.147 "is_configured": true, 00:17:45.147 "data_offset": 2048, 00:17:45.147 "data_size": 63488 00:17:45.147 }, 00:17:45.147 { 00:17:45.147 "name": "BaseBdev3", 00:17:45.147 "uuid": "dc3456b9-b068-5c8e-82d3-e4e8d411a97e", 00:17:45.147 "is_configured": true, 00:17:45.147 "data_offset": 2048, 00:17:45.147 "data_size": 63488 00:17:45.147 }, 00:17:45.147 { 00:17:45.147 "name": "BaseBdev4", 00:17:45.147 "uuid": "f81b0a2f-9d31-57e7-abdd-dff16f91199d", 00:17:45.147 "is_configured": true, 00:17:45.147 "data_offset": 2048, 00:17:45.147 "data_size": 63488 00:17:45.147 } 00:17:45.147 ] 00:17:45.147 }' 00:17:45.147 10:32:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:45.147 10:32:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.712 10:32:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@824 -- # sleep 1 00:17:45.712 10:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:45.712 [2024-07-25 10:32:49.337921] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e8ab00 00:17:46.646 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:46.904 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:46.904 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:46.904 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:17:46.904 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:46.904 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:46.904 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:46.905 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:46.905 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:46.905 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:46.905 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.905 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.905 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.905 10:32:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.905 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.905 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.162 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.162 "name": "raid_bdev1", 00:17:47.163 "uuid": "469f0ff2-62f3-4ff5-90ab-d9d7e6dd4169", 00:17:47.163 "strip_size_kb": 64, 00:17:47.163 "state": "online", 00:17:47.163 "raid_level": "raid0", 00:17:47.163 "superblock": true, 00:17:47.163 "num_base_bdevs": 4, 00:17:47.163 "num_base_bdevs_discovered": 4, 00:17:47.163 "num_base_bdevs_operational": 4, 00:17:47.163 "base_bdevs_list": [ 00:17:47.163 { 00:17:47.163 "name": "BaseBdev1", 00:17:47.163 "uuid": "f5c0d3a1-8d65-579d-a1ea-ebdaae12b7d6", 00:17:47.163 "is_configured": true, 00:17:47.163 "data_offset": 2048, 00:17:47.163 "data_size": 63488 00:17:47.163 }, 00:17:47.163 { 00:17:47.163 "name": "BaseBdev2", 00:17:47.163 "uuid": "584e66b3-63c0-578b-9bf7-f0e75af831f0", 00:17:47.163 "is_configured": true, 00:17:47.163 "data_offset": 2048, 00:17:47.163 "data_size": 63488 00:17:47.163 }, 00:17:47.163 { 00:17:47.163 "name": "BaseBdev3", 00:17:47.163 "uuid": "dc3456b9-b068-5c8e-82d3-e4e8d411a97e", 00:17:47.163 "is_configured": true, 00:17:47.163 "data_offset": 2048, 00:17:47.163 "data_size": 63488 00:17:47.163 }, 00:17:47.163 { 00:17:47.163 "name": "BaseBdev4", 00:17:47.163 "uuid": "f81b0a2f-9d31-57e7-abdd-dff16f91199d", 00:17:47.163 "is_configured": true, 00:17:47.163 "data_offset": 2048, 00:17:47.163 "data_size": 63488 00:17:47.163 } 00:17:47.163 ] 00:17:47.163 }' 00:17:47.163 10:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.163 10:32:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.728 10:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:47.986 [2024-07-25 10:32:51.562022] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:47.986 [2024-07-25 10:32:51.562080] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.986 [2024-07-25 10:32:51.565073] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.986 [2024-07-25 10:32:51.565141] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.986 [2024-07-25 10:32:51.565188] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.986 [2024-07-25 10:32:51.565213] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e878c0 name raid_bdev1, state offline 00:17:47.986 0 00:17:47.986 10:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2400671 00:17:47.986 10:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2400671 ']' 00:17:47.986 10:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2400671 00:17:47.986 10:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:17:47.986 10:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:47.986 10:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2400671 00:17:47.986 10:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:47.986 10:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:47.986 10:32:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2400671' 00:17:47.986 killing process with pid 2400671 00:17:47.986 10:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2400671 00:17:47.986 [2024-07-25 10:32:51.612670] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.986 10:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2400671 00:17:47.986 [2024-07-25 10:32:51.656815] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:48.244 10:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.wyIdNcuKfM 00:17:48.502 10:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:48.502 10:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:48.502 10:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:17:48.502 10:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:48.502 10:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:48.502 10:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:48.502 10:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:17:48.502 00:17:48.502 real 0m8.119s 00:17:48.502 user 0m13.276s 00:17:48.502 sys 0m1.118s 00:17:48.502 10:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:48.502 10:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.502 ************************************ 00:17:48.502 END TEST raid_write_error_test 00:17:48.502 ************************************ 00:17:48.502 10:32:51 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:48.502 10:32:51 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test 
raid_state_function_test concat 4 false 00:17:48.502 10:32:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:48.502 10:32:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:48.502 10:32:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:48.502 ************************************ 00:17:48.502 START TEST raid_state_function_test 00:17:48.502 ************************************ 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:48.502 10:32:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2401708 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2401708' 00:17:48.502 Process raid pid: 2401708 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2401708 /var/tmp/spdk-raid.sock 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2401708 ']' 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:48.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:48.502 10:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.502 [2024-07-25 10:32:52.072456] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
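The verify_raid_bdev_state calls traced throughout this log fetch the raid bdev's JSON with `bdev_raid_get_bdevs all`, pick out the target entry with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compare its fields against the expected state. A minimal Python sketch of that comparison follows; the JSON literal is abbreviated from the dump earlier in this log (the real dump lists four base bdevs), and the helper is a simplified, hypothetical stand-in for the shell function in bdev_raid.sh, which checks additional fields:

```python
import json

# Abbreviated from the bdev_raid_get_bdevs output captured above.
RAID_BDEV_INFO = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid0",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    """Simplified mirror of bdev_raid.sh's verify_raid_bdev_state checks."""
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

# The raid0 array above should verify as online with 4 operational bdevs.
print(verify_raid_bdev_state(RAID_BDEV_INFO, "online", "raid0", 64, 4))
```

In the raid_write_error_test flow above, this check is run twice with identical expectations (online, raid0, strip size 64, 4 operational bdevs): once after `bdev_raid_create` and once after injecting a write error on EE_BaseBdev1_malloc, since raid0 has no redundancy and the array stays online while the failed writes are merely counted.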
00:17:48.502 [2024-07-25 10:32:52.072526] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.502 [2024-07-25 10:32:52.150421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.760 [2024-07-25 10:32:52.258317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.760 [2024-07-25 10:32:52.324938] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.760 [2024-07-25 10:32:52.324975] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:49.693 [2024-07-25 10:32:53.326138] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:49.693 [2024-07-25 10:32:53.326186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:49.693 [2024-07-25 10:32:53.326199] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.693 [2024-07-25 10:32:53.326213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.693 [2024-07-25 10:32:53.326222] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:49.693 [2024-07-25 10:32:53.326235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:17:49.693 [2024-07-25 10:32:53.326244] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:49.693 [2024-07-25 10:32:53.326257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.693 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.951 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.951 "name": "Existed_Raid", 00:17:49.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.951 "strip_size_kb": 64, 
00:17:49.951 "state": "configuring", 00:17:49.951 "raid_level": "concat", 00:17:49.951 "superblock": false, 00:17:49.951 "num_base_bdevs": 4, 00:17:49.951 "num_base_bdevs_discovered": 0, 00:17:49.951 "num_base_bdevs_operational": 4, 00:17:49.951 "base_bdevs_list": [ 00:17:49.951 { 00:17:49.951 "name": "BaseBdev1", 00:17:49.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.951 "is_configured": false, 00:17:49.951 "data_offset": 0, 00:17:49.951 "data_size": 0 00:17:49.951 }, 00:17:49.951 { 00:17:49.951 "name": "BaseBdev2", 00:17:49.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.952 "is_configured": false, 00:17:49.952 "data_offset": 0, 00:17:49.952 "data_size": 0 00:17:49.952 }, 00:17:49.952 { 00:17:49.952 "name": "BaseBdev3", 00:17:49.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.952 "is_configured": false, 00:17:49.952 "data_offset": 0, 00:17:49.952 "data_size": 0 00:17:49.952 }, 00:17:49.952 { 00:17:49.952 "name": "BaseBdev4", 00:17:49.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.952 "is_configured": false, 00:17:49.952 "data_offset": 0, 00:17:49.952 "data_size": 0 00:17:49.952 } 00:17:49.952 ] 00:17:49.952 }' 00:17:49.952 10:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.952 10:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.516 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:50.774 [2024-07-25 10:32:54.388831] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:50.774 [2024-07-25 10:32:54.388866] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a4b640 name Existed_Raid, state configuring 00:17:50.774 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:51.032 [2024-07-25 10:32:54.637495] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:51.032 [2024-07-25 10:32:54.637532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:51.032 [2024-07-25 10:32:54.637544] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:51.032 [2024-07-25 10:32:54.637558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:51.032 [2024-07-25 10:32:54.637568] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:51.032 [2024-07-25 10:32:54.637581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:51.032 [2024-07-25 10:32:54.637590] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:51.032 [2024-07-25 10:32:54.637614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:51.032 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:51.290 [2024-07-25 10:32:54.890441] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.290 BaseBdev1 00:17:51.290 10:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:51.290 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:51.290 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:51.290 10:32:54 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:17:51.290 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:51.290 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:51.290 10:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:51.547 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:51.805 [ 00:17:51.805 { 00:17:51.805 "name": "BaseBdev1", 00:17:51.805 "aliases": [ 00:17:51.805 "aaf43f6b-d8e5-4893-ad8e-967ed072d611" 00:17:51.805 ], 00:17:51.805 "product_name": "Malloc disk", 00:17:51.805 "block_size": 512, 00:17:51.805 "num_blocks": 65536, 00:17:51.805 "uuid": "aaf43f6b-d8e5-4893-ad8e-967ed072d611", 00:17:51.805 "assigned_rate_limits": { 00:17:51.805 "rw_ios_per_sec": 0, 00:17:51.805 "rw_mbytes_per_sec": 0, 00:17:51.805 "r_mbytes_per_sec": 0, 00:17:51.805 "w_mbytes_per_sec": 0 00:17:51.805 }, 00:17:51.805 "claimed": true, 00:17:51.805 "claim_type": "exclusive_write", 00:17:51.805 "zoned": false, 00:17:51.805 "supported_io_types": { 00:17:51.805 "read": true, 00:17:51.805 "write": true, 00:17:51.805 "unmap": true, 00:17:51.805 "flush": true, 00:17:51.805 "reset": true, 00:17:51.805 "nvme_admin": false, 00:17:51.805 "nvme_io": false, 00:17:51.805 "nvme_io_md": false, 00:17:51.805 "write_zeroes": true, 00:17:51.805 "zcopy": true, 00:17:51.805 "get_zone_info": false, 00:17:51.805 "zone_management": false, 00:17:51.805 "zone_append": false, 00:17:51.805 "compare": false, 00:17:51.805 "compare_and_write": false, 00:17:51.805 "abort": true, 00:17:51.805 "seek_hole": false, 00:17:51.805 "seek_data": false, 00:17:51.805 "copy": true, 00:17:51.805 "nvme_iov_md": 
false 00:17:51.805 }, 00:17:51.805 "memory_domains": [ 00:17:51.805 { 00:17:51.805 "dma_device_id": "system", 00:17:51.805 "dma_device_type": 1 00:17:51.805 }, 00:17:51.805 { 00:17:51.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.805 "dma_device_type": 2 00:17:51.805 } 00:17:51.805 ], 00:17:51.805 "driver_specific": {} 00:17:51.805 } 00:17:51.805 ] 00:17:51.805 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:51.805 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:51.805 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:51.805 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:51.805 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:51.806 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:51.806 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:51.806 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:51.806 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:51.806 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:51.806 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:51.806 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.806 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.064 10:32:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.064 "name": "Existed_Raid", 00:17:52.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.064 "strip_size_kb": 64, 00:17:52.064 "state": "configuring", 00:17:52.064 "raid_level": "concat", 00:17:52.064 "superblock": false, 00:17:52.064 "num_base_bdevs": 4, 00:17:52.064 "num_base_bdevs_discovered": 1, 00:17:52.064 "num_base_bdevs_operational": 4, 00:17:52.064 "base_bdevs_list": [ 00:17:52.064 { 00:17:52.064 "name": "BaseBdev1", 00:17:52.064 "uuid": "aaf43f6b-d8e5-4893-ad8e-967ed072d611", 00:17:52.064 "is_configured": true, 00:17:52.064 "data_offset": 0, 00:17:52.064 "data_size": 65536 00:17:52.064 }, 00:17:52.064 { 00:17:52.064 "name": "BaseBdev2", 00:17:52.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.064 "is_configured": false, 00:17:52.064 "data_offset": 0, 00:17:52.064 "data_size": 0 00:17:52.064 }, 00:17:52.064 { 00:17:52.064 "name": "BaseBdev3", 00:17:52.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.064 "is_configured": false, 00:17:52.064 "data_offset": 0, 00:17:52.064 "data_size": 0 00:17:52.064 }, 00:17:52.064 { 00:17:52.064 "name": "BaseBdev4", 00:17:52.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.064 "is_configured": false, 00:17:52.064 "data_offset": 0, 00:17:52.064 "data_size": 0 00:17:52.064 } 00:17:52.064 ] 00:17:52.064 }' 00:17:52.064 10:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.064 10:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.629 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:52.888 [2024-07-25 10:32:56.502666] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:52.888 [2024-07-25 10:32:56.502721] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a4ae50 name Existed_Raid, state configuring 00:17:52.888 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:53.146 [2024-07-25 10:32:56.799516] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.146 [2024-07-25 10:32:56.801065] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.146 [2024-07-25 10:32:56.801109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.146 [2024-07-25 10:32:56.801123] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:53.146 [2024-07-25 10:32:56.801138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:53.146 [2024-07-25 10:32:56.801148] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:53.146 [2024-07-25 10:32:56.801161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.146 10:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.404 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.404 "name": "Existed_Raid", 00:17:53.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.404 "strip_size_kb": 64, 00:17:53.404 "state": "configuring", 00:17:53.404 "raid_level": "concat", 00:17:53.404 "superblock": false, 00:17:53.404 "num_base_bdevs": 4, 00:17:53.404 "num_base_bdevs_discovered": 1, 00:17:53.404 "num_base_bdevs_operational": 4, 00:17:53.404 "base_bdevs_list": [ 00:17:53.404 { 00:17:53.404 "name": "BaseBdev1", 00:17:53.404 "uuid": "aaf43f6b-d8e5-4893-ad8e-967ed072d611", 00:17:53.404 "is_configured": true, 00:17:53.404 "data_offset": 0, 00:17:53.404 "data_size": 65536 00:17:53.404 }, 00:17:53.404 { 00:17:53.404 "name": "BaseBdev2", 00:17:53.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.404 "is_configured": false, 00:17:53.404 "data_offset": 0, 00:17:53.404 "data_size": 0 00:17:53.404 }, 00:17:53.404 { 00:17:53.404 "name": "BaseBdev3", 
00:17:53.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.404 "is_configured": false, 00:17:53.404 "data_offset": 0, 00:17:53.404 "data_size": 0 00:17:53.404 }, 00:17:53.404 { 00:17:53.404 "name": "BaseBdev4", 00:17:53.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.404 "is_configured": false, 00:17:53.404 "data_offset": 0, 00:17:53.404 "data_size": 0 00:17:53.404 } 00:17:53.404 ] 00:17:53.404 }' 00:17:53.404 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.404 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.971 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:54.229 [2024-07-25 10:32:57.858930] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:54.229 BaseBdev2 00:17:54.229 10:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:54.229 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:54.229 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:54.229 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:54.229 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:54.229 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:54.229 10:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.488 10:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:54.746 [ 00:17:54.746 { 00:17:54.746 "name": "BaseBdev2", 00:17:54.746 "aliases": [ 00:17:54.746 "c6c6f45b-f3e1-4b50-a4a2-8726cde8e512" 00:17:54.746 ], 00:17:54.746 "product_name": "Malloc disk", 00:17:54.746 "block_size": 512, 00:17:54.746 "num_blocks": 65536, 00:17:54.746 "uuid": "c6c6f45b-f3e1-4b50-a4a2-8726cde8e512", 00:17:54.746 "assigned_rate_limits": { 00:17:54.746 "rw_ios_per_sec": 0, 00:17:54.746 "rw_mbytes_per_sec": 0, 00:17:54.746 "r_mbytes_per_sec": 0, 00:17:54.746 "w_mbytes_per_sec": 0 00:17:54.746 }, 00:17:54.746 "claimed": true, 00:17:54.746 "claim_type": "exclusive_write", 00:17:54.746 "zoned": false, 00:17:54.746 "supported_io_types": { 00:17:54.746 "read": true, 00:17:54.746 "write": true, 00:17:54.746 "unmap": true, 00:17:54.746 "flush": true, 00:17:54.746 "reset": true, 00:17:54.746 "nvme_admin": false, 00:17:54.746 "nvme_io": false, 00:17:54.746 "nvme_io_md": false, 00:17:54.746 "write_zeroes": true, 00:17:54.746 "zcopy": true, 00:17:54.746 "get_zone_info": false, 00:17:54.746 "zone_management": false, 00:17:54.746 "zone_append": false, 00:17:54.746 "compare": false, 00:17:54.746 "compare_and_write": false, 00:17:54.746 "abort": true, 00:17:54.746 "seek_hole": false, 00:17:54.746 "seek_data": false, 00:17:54.746 "copy": true, 00:17:54.746 "nvme_iov_md": false 00:17:54.746 }, 00:17:54.746 "memory_domains": [ 00:17:54.746 { 00:17:54.746 "dma_device_id": "system", 00:17:54.746 "dma_device_type": 1 00:17:54.746 }, 00:17:54.746 { 00:17:54.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.746 "dma_device_type": 2 00:17:54.746 } 00:17:54.746 ], 00:17:54.746 "driver_specific": {} 00:17:54.746 } 00:17:54.746 ] 00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.746 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.747 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.005 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.005 "name": "Existed_Raid", 00:17:55.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.005 "strip_size_kb": 64, 00:17:55.005 "state": "configuring", 00:17:55.005 "raid_level": "concat", 00:17:55.005 "superblock": false, 00:17:55.005 "num_base_bdevs": 4, 00:17:55.005 
"num_base_bdevs_discovered": 2, 00:17:55.005 "num_base_bdevs_operational": 4, 00:17:55.005 "base_bdevs_list": [ 00:17:55.005 { 00:17:55.005 "name": "BaseBdev1", 00:17:55.005 "uuid": "aaf43f6b-d8e5-4893-ad8e-967ed072d611", 00:17:55.005 "is_configured": true, 00:17:55.005 "data_offset": 0, 00:17:55.005 "data_size": 65536 00:17:55.005 }, 00:17:55.005 { 00:17:55.005 "name": "BaseBdev2", 00:17:55.005 "uuid": "c6c6f45b-f3e1-4b50-a4a2-8726cde8e512", 00:17:55.005 "is_configured": true, 00:17:55.005 "data_offset": 0, 00:17:55.005 "data_size": 65536 00:17:55.005 }, 00:17:55.005 { 00:17:55.005 "name": "BaseBdev3", 00:17:55.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.005 "is_configured": false, 00:17:55.005 "data_offset": 0, 00:17:55.005 "data_size": 0 00:17:55.005 }, 00:17:55.005 { 00:17:55.005 "name": "BaseBdev4", 00:17:55.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.005 "is_configured": false, 00:17:55.005 "data_offset": 0, 00:17:55.005 "data_size": 0 00:17:55.005 } 00:17:55.005 ] 00:17:55.005 }' 00:17:55.005 10:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.005 10:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.571 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:55.829 [2024-07-25 10:32:59.413394] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:55.829 BaseBdev3 00:17:55.829 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:55.829 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:55.829 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:55.829 10:32:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:55.829 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:55.829 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:55.829 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.087 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:56.346 [ 00:17:56.346 { 00:17:56.346 "name": "BaseBdev3", 00:17:56.346 "aliases": [ 00:17:56.346 "600df52a-928e-465b-90ff-88aed414fe08" 00:17:56.346 ], 00:17:56.346 "product_name": "Malloc disk", 00:17:56.346 "block_size": 512, 00:17:56.346 "num_blocks": 65536, 00:17:56.346 "uuid": "600df52a-928e-465b-90ff-88aed414fe08", 00:17:56.346 "assigned_rate_limits": { 00:17:56.346 "rw_ios_per_sec": 0, 00:17:56.346 "rw_mbytes_per_sec": 0, 00:17:56.346 "r_mbytes_per_sec": 0, 00:17:56.346 "w_mbytes_per_sec": 0 00:17:56.346 }, 00:17:56.346 "claimed": true, 00:17:56.346 "claim_type": "exclusive_write", 00:17:56.346 "zoned": false, 00:17:56.346 "supported_io_types": { 00:17:56.346 "read": true, 00:17:56.346 "write": true, 00:17:56.346 "unmap": true, 00:17:56.346 "flush": true, 00:17:56.346 "reset": true, 00:17:56.346 "nvme_admin": false, 00:17:56.346 "nvme_io": false, 00:17:56.346 "nvme_io_md": false, 00:17:56.346 "write_zeroes": true, 00:17:56.346 "zcopy": true, 00:17:56.346 "get_zone_info": false, 00:17:56.346 "zone_management": false, 00:17:56.346 "zone_append": false, 00:17:56.346 "compare": false, 00:17:56.346 "compare_and_write": false, 00:17:56.346 "abort": true, 00:17:56.346 "seek_hole": false, 00:17:56.346 "seek_data": false, 00:17:56.346 "copy": 
true, 00:17:56.346 "nvme_iov_md": false 00:17:56.346 }, 00:17:56.346 "memory_domains": [ 00:17:56.346 { 00:17:56.346 "dma_device_id": "system", 00:17:56.346 "dma_device_type": 1 00:17:56.346 }, 00:17:56.346 { 00:17:56.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.346 "dma_device_type": 2 00:17:56.346 } 00:17:56.346 ], 00:17:56.346 "driver_specific": {} 00:17:56.346 } 00:17:56.346 ] 00:17:56.346 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:56.346 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:56.346 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:56.346 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:56.346 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:56.346 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:56.346 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:56.346 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:56.346 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:56.346 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.346 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.346 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:56.347 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.347 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.347 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.653 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.653 "name": "Existed_Raid", 00:17:56.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.653 "strip_size_kb": 64, 00:17:56.653 "state": "configuring", 00:17:56.653 "raid_level": "concat", 00:17:56.653 "superblock": false, 00:17:56.653 "num_base_bdevs": 4, 00:17:56.653 "num_base_bdevs_discovered": 3, 00:17:56.653 "num_base_bdevs_operational": 4, 00:17:56.653 "base_bdevs_list": [ 00:17:56.653 { 00:17:56.653 "name": "BaseBdev1", 00:17:56.653 "uuid": "aaf43f6b-d8e5-4893-ad8e-967ed072d611", 00:17:56.653 "is_configured": true, 00:17:56.653 "data_offset": 0, 00:17:56.653 "data_size": 65536 00:17:56.653 }, 00:17:56.653 { 00:17:56.653 "name": "BaseBdev2", 00:17:56.653 "uuid": "c6c6f45b-f3e1-4b50-a4a2-8726cde8e512", 00:17:56.653 "is_configured": true, 00:17:56.653 "data_offset": 0, 00:17:56.653 "data_size": 65536 00:17:56.653 }, 00:17:56.653 { 00:17:56.653 "name": "BaseBdev3", 00:17:56.653 "uuid": "600df52a-928e-465b-90ff-88aed414fe08", 00:17:56.653 "is_configured": true, 00:17:56.653 "data_offset": 0, 00:17:56.653 "data_size": 65536 00:17:56.653 }, 00:17:56.653 { 00:17:56.653 "name": "BaseBdev4", 00:17:56.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.653 "is_configured": false, 00:17:56.653 "data_offset": 0, 00:17:56.653 "data_size": 0 00:17:56.653 } 00:17:56.653 ] 00:17:56.653 }' 00:17:56.653 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.653 10:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.218 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:57.475 [2024-07-25 10:33:00.955796] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:57.475 [2024-07-25 10:33:00.955851] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a4bcb0 00:17:57.475 [2024-07-25 10:33:00.955861] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:57.475 [2024-07-25 10:33:00.956056] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bf4f00 00:17:57.475 [2024-07-25 10:33:00.956234] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a4bcb0 00:17:57.475 [2024-07-25 10:33:00.956251] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1a4bcb0 00:17:57.475 [2024-07-25 10:33:00.956459] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.475 BaseBdev4 00:17:57.475 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:57.475 10:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:57.475 10:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:57.475 10:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:57.475 10:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:57.475 10:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:57.475 10:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:57.733 10:33:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:57.991 [ 00:17:57.991 { 00:17:57.991 "name": "BaseBdev4", 00:17:57.991 "aliases": [ 00:17:57.991 "8fcb89f5-bcc8-472a-882f-90588ed9074b" 00:17:57.991 ], 00:17:57.991 "product_name": "Malloc disk", 00:17:57.991 "block_size": 512, 00:17:57.991 "num_blocks": 65536, 00:17:57.991 "uuid": "8fcb89f5-bcc8-472a-882f-90588ed9074b", 00:17:57.991 "assigned_rate_limits": { 00:17:57.991 "rw_ios_per_sec": 0, 00:17:57.991 "rw_mbytes_per_sec": 0, 00:17:57.991 "r_mbytes_per_sec": 0, 00:17:57.991 "w_mbytes_per_sec": 0 00:17:57.991 }, 00:17:57.991 "claimed": true, 00:17:57.991 "claim_type": "exclusive_write", 00:17:57.991 "zoned": false, 00:17:57.991 "supported_io_types": { 00:17:57.991 "read": true, 00:17:57.991 "write": true, 00:17:57.991 "unmap": true, 00:17:57.991 "flush": true, 00:17:57.991 "reset": true, 00:17:57.991 "nvme_admin": false, 00:17:57.991 "nvme_io": false, 00:17:57.991 "nvme_io_md": false, 00:17:57.991 "write_zeroes": true, 00:17:57.991 "zcopy": true, 00:17:57.991 "get_zone_info": false, 00:17:57.991 "zone_management": false, 00:17:57.991 "zone_append": false, 00:17:57.991 "compare": false, 00:17:57.991 "compare_and_write": false, 00:17:57.991 "abort": true, 00:17:57.991 "seek_hole": false, 00:17:57.991 "seek_data": false, 00:17:57.991 "copy": true, 00:17:57.991 "nvme_iov_md": false 00:17:57.991 }, 00:17:57.991 "memory_domains": [ 00:17:57.991 { 00:17:57.991 "dma_device_id": "system", 00:17:57.991 "dma_device_type": 1 00:17:57.991 }, 00:17:57.991 { 00:17:57.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.991 "dma_device_type": 2 00:17:57.991 } 00:17:57.991 ], 00:17:57.991 "driver_specific": {} 00:17:57.991 } 00:17:57.991 ] 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.991 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.249 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.250 "name": "Existed_Raid", 00:17:58.250 "uuid": "153139fc-36a2-4fbe-a6b9-be673f14f841", 00:17:58.250 "strip_size_kb": 64, 00:17:58.250 "state": "online", 00:17:58.250 "raid_level": "concat", 00:17:58.250 "superblock": false, 00:17:58.250 
"num_base_bdevs": 4, 00:17:58.250 "num_base_bdevs_discovered": 4, 00:17:58.250 "num_base_bdevs_operational": 4, 00:17:58.250 "base_bdevs_list": [ 00:17:58.250 { 00:17:58.250 "name": "BaseBdev1", 00:17:58.250 "uuid": "aaf43f6b-d8e5-4893-ad8e-967ed072d611", 00:17:58.250 "is_configured": true, 00:17:58.250 "data_offset": 0, 00:17:58.250 "data_size": 65536 00:17:58.250 }, 00:17:58.250 { 00:17:58.250 "name": "BaseBdev2", 00:17:58.250 "uuid": "c6c6f45b-f3e1-4b50-a4a2-8726cde8e512", 00:17:58.250 "is_configured": true, 00:17:58.250 "data_offset": 0, 00:17:58.250 "data_size": 65536 00:17:58.250 }, 00:17:58.250 { 00:17:58.250 "name": "BaseBdev3", 00:17:58.250 "uuid": "600df52a-928e-465b-90ff-88aed414fe08", 00:17:58.250 "is_configured": true, 00:17:58.250 "data_offset": 0, 00:17:58.250 "data_size": 65536 00:17:58.250 }, 00:17:58.250 { 00:17:58.250 "name": "BaseBdev4", 00:17:58.250 "uuid": "8fcb89f5-bcc8-472a-882f-90588ed9074b", 00:17:58.250 "is_configured": true, 00:17:58.250 "data_offset": 0, 00:17:58.250 "data_size": 65536 00:17:58.250 } 00:17:58.250 ] 00:17:58.250 }' 00:17:58.250 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.250 10:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.815 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:58.815 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:58.815 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:58.815 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:58.815 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:58.815 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:58.815 10:33:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:58.815 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:59.074 [2024-07-25 10:33:02.528337] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.074 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:59.074 "name": "Existed_Raid", 00:17:59.074 "aliases": [ 00:17:59.074 "153139fc-36a2-4fbe-a6b9-be673f14f841" 00:17:59.074 ], 00:17:59.074 "product_name": "Raid Volume", 00:17:59.074 "block_size": 512, 00:17:59.074 "num_blocks": 262144, 00:17:59.074 "uuid": "153139fc-36a2-4fbe-a6b9-be673f14f841", 00:17:59.074 "assigned_rate_limits": { 00:17:59.074 "rw_ios_per_sec": 0, 00:17:59.074 "rw_mbytes_per_sec": 0, 00:17:59.074 "r_mbytes_per_sec": 0, 00:17:59.074 "w_mbytes_per_sec": 0 00:17:59.074 }, 00:17:59.074 "claimed": false, 00:17:59.074 "zoned": false, 00:17:59.074 "supported_io_types": { 00:17:59.074 "read": true, 00:17:59.074 "write": true, 00:17:59.074 "unmap": true, 00:17:59.074 "flush": true, 00:17:59.074 "reset": true, 00:17:59.074 "nvme_admin": false, 00:17:59.074 "nvme_io": false, 00:17:59.074 "nvme_io_md": false, 00:17:59.074 "write_zeroes": true, 00:17:59.074 "zcopy": false, 00:17:59.074 "get_zone_info": false, 00:17:59.074 "zone_management": false, 00:17:59.074 "zone_append": false, 00:17:59.074 "compare": false, 00:17:59.074 "compare_and_write": false, 00:17:59.074 "abort": false, 00:17:59.074 "seek_hole": false, 00:17:59.074 "seek_data": false, 00:17:59.074 "copy": false, 00:17:59.074 "nvme_iov_md": false 00:17:59.074 }, 00:17:59.074 "memory_domains": [ 00:17:59.074 { 00:17:59.074 "dma_device_id": "system", 00:17:59.074 "dma_device_type": 1 00:17:59.074 }, 00:17:59.074 { 00:17:59.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.074 
"dma_device_type": 2 00:17:59.074 }, 00:17:59.074 { 00:17:59.074 "dma_device_id": "system", 00:17:59.074 "dma_device_type": 1 00:17:59.074 }, 00:17:59.074 { 00:17:59.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.074 "dma_device_type": 2 00:17:59.074 }, 00:17:59.074 { 00:17:59.074 "dma_device_id": "system", 00:17:59.074 "dma_device_type": 1 00:17:59.074 }, 00:17:59.074 { 00:17:59.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.074 "dma_device_type": 2 00:17:59.074 }, 00:17:59.074 { 00:17:59.074 "dma_device_id": "system", 00:17:59.074 "dma_device_type": 1 00:17:59.074 }, 00:17:59.074 { 00:17:59.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.074 "dma_device_type": 2 00:17:59.074 } 00:17:59.074 ], 00:17:59.074 "driver_specific": { 00:17:59.074 "raid": { 00:17:59.074 "uuid": "153139fc-36a2-4fbe-a6b9-be673f14f841", 00:17:59.074 "strip_size_kb": 64, 00:17:59.074 "state": "online", 00:17:59.074 "raid_level": "concat", 00:17:59.074 "superblock": false, 00:17:59.074 "num_base_bdevs": 4, 00:17:59.074 "num_base_bdevs_discovered": 4, 00:17:59.074 "num_base_bdevs_operational": 4, 00:17:59.074 "base_bdevs_list": [ 00:17:59.074 { 00:17:59.074 "name": "BaseBdev1", 00:17:59.074 "uuid": "aaf43f6b-d8e5-4893-ad8e-967ed072d611", 00:17:59.074 "is_configured": true, 00:17:59.074 "data_offset": 0, 00:17:59.074 "data_size": 65536 00:17:59.074 }, 00:17:59.074 { 00:17:59.074 "name": "BaseBdev2", 00:17:59.074 "uuid": "c6c6f45b-f3e1-4b50-a4a2-8726cde8e512", 00:17:59.074 "is_configured": true, 00:17:59.074 "data_offset": 0, 00:17:59.074 "data_size": 65536 00:17:59.074 }, 00:17:59.074 { 00:17:59.074 "name": "BaseBdev3", 00:17:59.075 "uuid": "600df52a-928e-465b-90ff-88aed414fe08", 00:17:59.075 "is_configured": true, 00:17:59.075 "data_offset": 0, 00:17:59.075 "data_size": 65536 00:17:59.075 }, 00:17:59.075 { 00:17:59.075 "name": "BaseBdev4", 00:17:59.075 "uuid": "8fcb89f5-bcc8-472a-882f-90588ed9074b", 00:17:59.075 "is_configured": true, 00:17:59.075 "data_offset": 0, 
00:17:59.075 "data_size": 65536 00:17:59.075 } 00:17:59.075 ] 00:17:59.075 } 00:17:59.075 } 00:17:59.075 }' 00:17:59.075 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.075 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:59.075 BaseBdev2 00:17:59.075 BaseBdev3 00:17:59.075 BaseBdev4' 00:17:59.075 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:59.075 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:59.075 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:59.333 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:59.333 "name": "BaseBdev1", 00:17:59.334 "aliases": [ 00:17:59.334 "aaf43f6b-d8e5-4893-ad8e-967ed072d611" 00:17:59.334 ], 00:17:59.334 "product_name": "Malloc disk", 00:17:59.334 "block_size": 512, 00:17:59.334 "num_blocks": 65536, 00:17:59.334 "uuid": "aaf43f6b-d8e5-4893-ad8e-967ed072d611", 00:17:59.334 "assigned_rate_limits": { 00:17:59.334 "rw_ios_per_sec": 0, 00:17:59.334 "rw_mbytes_per_sec": 0, 00:17:59.334 "r_mbytes_per_sec": 0, 00:17:59.334 "w_mbytes_per_sec": 0 00:17:59.334 }, 00:17:59.334 "claimed": true, 00:17:59.334 "claim_type": "exclusive_write", 00:17:59.334 "zoned": false, 00:17:59.334 "supported_io_types": { 00:17:59.334 "read": true, 00:17:59.334 "write": true, 00:17:59.334 "unmap": true, 00:17:59.334 "flush": true, 00:17:59.334 "reset": true, 00:17:59.334 "nvme_admin": false, 00:17:59.334 "nvme_io": false, 00:17:59.334 "nvme_io_md": false, 00:17:59.334 "write_zeroes": true, 00:17:59.334 "zcopy": true, 00:17:59.334 "get_zone_info": false, 00:17:59.334 "zone_management": 
false, 00:17:59.334 "zone_append": false, 00:17:59.334 "compare": false, 00:17:59.334 "compare_and_write": false, 00:17:59.334 "abort": true, 00:17:59.334 "seek_hole": false, 00:17:59.334 "seek_data": false, 00:17:59.334 "copy": true, 00:17:59.334 "nvme_iov_md": false 00:17:59.334 }, 00:17:59.334 "memory_domains": [ 00:17:59.334 { 00:17:59.334 "dma_device_id": "system", 00:17:59.334 "dma_device_type": 1 00:17:59.334 }, 00:17:59.334 { 00:17:59.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.334 "dma_device_type": 2 00:17:59.334 } 00:17:59.334 ], 00:17:59.334 "driver_specific": {} 00:17:59.334 }' 00:17:59.334 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.334 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.334 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:59.334 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.334 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.334 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:59.334 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.334 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.592 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:59.592 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.592 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.592 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:59.592 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:59.592 10:33:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:59.592 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:59.850 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:59.850 "name": "BaseBdev2", 00:17:59.850 "aliases": [ 00:17:59.850 "c6c6f45b-f3e1-4b50-a4a2-8726cde8e512" 00:17:59.850 ], 00:17:59.850 "product_name": "Malloc disk", 00:17:59.850 "block_size": 512, 00:17:59.850 "num_blocks": 65536, 00:17:59.850 "uuid": "c6c6f45b-f3e1-4b50-a4a2-8726cde8e512", 00:17:59.850 "assigned_rate_limits": { 00:17:59.850 "rw_ios_per_sec": 0, 00:17:59.850 "rw_mbytes_per_sec": 0, 00:17:59.850 "r_mbytes_per_sec": 0, 00:17:59.850 "w_mbytes_per_sec": 0 00:17:59.850 }, 00:17:59.850 "claimed": true, 00:17:59.850 "claim_type": "exclusive_write", 00:17:59.850 "zoned": false, 00:17:59.850 "supported_io_types": { 00:17:59.850 "read": true, 00:17:59.850 "write": true, 00:17:59.850 "unmap": true, 00:17:59.850 "flush": true, 00:17:59.850 "reset": true, 00:17:59.850 "nvme_admin": false, 00:17:59.850 "nvme_io": false, 00:17:59.850 "nvme_io_md": false, 00:17:59.850 "write_zeroes": true, 00:17:59.850 "zcopy": true, 00:17:59.850 "get_zone_info": false, 00:17:59.850 "zone_management": false, 00:17:59.850 "zone_append": false, 00:17:59.850 "compare": false, 00:17:59.850 "compare_and_write": false, 00:17:59.850 "abort": true, 00:17:59.850 "seek_hole": false, 00:17:59.850 "seek_data": false, 00:17:59.850 "copy": true, 00:17:59.850 "nvme_iov_md": false 00:17:59.850 }, 00:17:59.850 "memory_domains": [ 00:17:59.850 { 00:17:59.850 "dma_device_id": "system", 00:17:59.850 "dma_device_type": 1 00:17:59.850 }, 00:17:59.850 { 00:17:59.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.850 "dma_device_type": 2 00:17:59.850 } 00:17:59.850 ], 00:17:59.850 "driver_specific": {} 00:17:59.850 
}' 00:17:59.850 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.850 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.850 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:59.850 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.850 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.850 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:59.850 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.108 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.108 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:00.108 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.108 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.108 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:00.108 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:00.108 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:00.108 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:00.366 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:00.366 "name": "BaseBdev3", 00:18:00.366 "aliases": [ 00:18:00.366 "600df52a-928e-465b-90ff-88aed414fe08" 00:18:00.366 ], 00:18:00.366 "product_name": "Malloc disk", 00:18:00.366 "block_size": 512, 00:18:00.366 "num_blocks": 65536, 
00:18:00.366 "uuid": "600df52a-928e-465b-90ff-88aed414fe08", 00:18:00.366 "assigned_rate_limits": { 00:18:00.366 "rw_ios_per_sec": 0, 00:18:00.366 "rw_mbytes_per_sec": 0, 00:18:00.366 "r_mbytes_per_sec": 0, 00:18:00.366 "w_mbytes_per_sec": 0 00:18:00.366 }, 00:18:00.366 "claimed": true, 00:18:00.366 "claim_type": "exclusive_write", 00:18:00.366 "zoned": false, 00:18:00.366 "supported_io_types": { 00:18:00.366 "read": true, 00:18:00.366 "write": true, 00:18:00.366 "unmap": true, 00:18:00.366 "flush": true, 00:18:00.366 "reset": true, 00:18:00.366 "nvme_admin": false, 00:18:00.366 "nvme_io": false, 00:18:00.366 "nvme_io_md": false, 00:18:00.366 "write_zeroes": true, 00:18:00.366 "zcopy": true, 00:18:00.366 "get_zone_info": false, 00:18:00.366 "zone_management": false, 00:18:00.366 "zone_append": false, 00:18:00.366 "compare": false, 00:18:00.366 "compare_and_write": false, 00:18:00.366 "abort": true, 00:18:00.366 "seek_hole": false, 00:18:00.366 "seek_data": false, 00:18:00.366 "copy": true, 00:18:00.366 "nvme_iov_md": false 00:18:00.366 }, 00:18:00.366 "memory_domains": [ 00:18:00.366 { 00:18:00.366 "dma_device_id": "system", 00:18:00.366 "dma_device_type": 1 00:18:00.366 }, 00:18:00.366 { 00:18:00.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.366 "dma_device_type": 2 00:18:00.366 } 00:18:00.366 ], 00:18:00.366 "driver_specific": {} 00:18:00.366 }' 00:18:00.366 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.366 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.366 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:00.366 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.366 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.624 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:18:00.624 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.624 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.624 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:00.624 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.624 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.624 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:00.624 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:00.624 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:00.624 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:00.883 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:00.883 "name": "BaseBdev4", 00:18:00.883 "aliases": [ 00:18:00.883 "8fcb89f5-bcc8-472a-882f-90588ed9074b" 00:18:00.883 ], 00:18:00.883 "product_name": "Malloc disk", 00:18:00.883 "block_size": 512, 00:18:00.883 "num_blocks": 65536, 00:18:00.883 "uuid": "8fcb89f5-bcc8-472a-882f-90588ed9074b", 00:18:00.883 "assigned_rate_limits": { 00:18:00.883 "rw_ios_per_sec": 0, 00:18:00.883 "rw_mbytes_per_sec": 0, 00:18:00.883 "r_mbytes_per_sec": 0, 00:18:00.883 "w_mbytes_per_sec": 0 00:18:00.883 }, 00:18:00.883 "claimed": true, 00:18:00.883 "claim_type": "exclusive_write", 00:18:00.883 "zoned": false, 00:18:00.883 "supported_io_types": { 00:18:00.883 "read": true, 00:18:00.883 "write": true, 00:18:00.883 "unmap": true, 00:18:00.883 "flush": true, 00:18:00.883 "reset": true, 00:18:00.883 "nvme_admin": false, 00:18:00.883 "nvme_io": false, 00:18:00.883 
"nvme_io_md": false, 00:18:00.883 "write_zeroes": true, 00:18:00.883 "zcopy": true, 00:18:00.883 "get_zone_info": false, 00:18:00.883 "zone_management": false, 00:18:00.883 "zone_append": false, 00:18:00.883 "compare": false, 00:18:00.883 "compare_and_write": false, 00:18:00.883 "abort": true, 00:18:00.883 "seek_hole": false, 00:18:00.883 "seek_data": false, 00:18:00.883 "copy": true, 00:18:00.883 "nvme_iov_md": false 00:18:00.883 }, 00:18:00.883 "memory_domains": [ 00:18:00.883 { 00:18:00.883 "dma_device_id": "system", 00:18:00.883 "dma_device_type": 1 00:18:00.883 }, 00:18:00.883 { 00:18:00.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.883 "dma_device_type": 2 00:18:00.883 } 00:18:00.883 ], 00:18:00.883 "driver_specific": {} 00:18:00.883 }' 00:18:00.883 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.883 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.883 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:00.883 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.883 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.141 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:01.141 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.141 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.141 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:01.141 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.141 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.141 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:18:01.141 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:01.401 [2024-07-25 10:33:04.978654] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:01.401 [2024-07-25 10:33:04.978688] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.401 [2024-07-25 10:33:04.978742] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.401 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:01.401 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:18:01.401 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:01.401 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:01.401 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:01.401 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:01.401 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:01.401 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:01.401 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:01.401 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:01.401 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:01.401 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.401 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:18:01.401 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.401 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.401 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.401 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.659 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.659 "name": "Existed_Raid", 00:18:01.659 "uuid": "153139fc-36a2-4fbe-a6b9-be673f14f841", 00:18:01.659 "strip_size_kb": 64, 00:18:01.659 "state": "offline", 00:18:01.659 "raid_level": "concat", 00:18:01.659 "superblock": false, 00:18:01.659 "num_base_bdevs": 4, 00:18:01.659 "num_base_bdevs_discovered": 3, 00:18:01.659 "num_base_bdevs_operational": 3, 00:18:01.659 "base_bdevs_list": [ 00:18:01.659 { 00:18:01.659 "name": null, 00:18:01.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.659 "is_configured": false, 00:18:01.659 "data_offset": 0, 00:18:01.659 "data_size": 65536 00:18:01.659 }, 00:18:01.659 { 00:18:01.659 "name": "BaseBdev2", 00:18:01.659 "uuid": "c6c6f45b-f3e1-4b50-a4a2-8726cde8e512", 00:18:01.659 "is_configured": true, 00:18:01.659 "data_offset": 0, 00:18:01.659 "data_size": 65536 00:18:01.659 }, 00:18:01.659 { 00:18:01.659 "name": "BaseBdev3", 00:18:01.659 "uuid": "600df52a-928e-465b-90ff-88aed414fe08", 00:18:01.659 "is_configured": true, 00:18:01.659 "data_offset": 0, 00:18:01.659 "data_size": 65536 00:18:01.659 }, 00:18:01.659 { 00:18:01.659 "name": "BaseBdev4", 00:18:01.659 "uuid": "8fcb89f5-bcc8-472a-882f-90588ed9074b", 00:18:01.659 "is_configured": true, 00:18:01.659 "data_offset": 0, 00:18:01.659 "data_size": 65536 00:18:01.659 } 00:18:01.659 ] 00:18:01.659 }' 
00:18:01.659 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.659 10:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.225 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:02.225 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:02.225 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.225 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:02.484 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:02.484 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:02.484 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:02.742 [2024-07-25 10:33:06.396286] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:02.742 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:02.742 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:02.742 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.742 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:03.000 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:03.000 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' 
Existed_Raid '!=' Existed_Raid ']' 00:18:03.000 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:03.258 [2024-07-25 10:33:06.927254] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:03.258 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:03.258 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:03.258 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.258 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:03.516 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:03.516 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.517 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:03.775 [2024-07-25 10:33:07.433995] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:03.775 [2024-07-25 10:33:07.434057] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a4bcb0 name Existed_Raid, state offline 00:18:03.775 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:03.775 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:03.775 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.775 10:33:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:04.033 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:04.033 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:04.033 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:18:04.033 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:04.033 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:04.033 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:04.292 BaseBdev2 00:18:04.292 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:04.292 10:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:04.292 10:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:04.292 10:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:04.292 10:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:04.292 10:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:04.292 10:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.550 10:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:04.808 [ 00:18:04.808 { 00:18:04.808 "name": 
"BaseBdev2", 00:18:04.808 "aliases": [ 00:18:04.808 "86650403-7604-45b8-8e13-e5cb1e1f29dc" 00:18:04.808 ], 00:18:04.808 "product_name": "Malloc disk", 00:18:04.808 "block_size": 512, 00:18:04.808 "num_blocks": 65536, 00:18:04.808 "uuid": "86650403-7604-45b8-8e13-e5cb1e1f29dc", 00:18:04.808 "assigned_rate_limits": { 00:18:04.808 "rw_ios_per_sec": 0, 00:18:04.808 "rw_mbytes_per_sec": 0, 00:18:04.808 "r_mbytes_per_sec": 0, 00:18:04.808 "w_mbytes_per_sec": 0 00:18:04.808 }, 00:18:04.808 "claimed": false, 00:18:04.808 "zoned": false, 00:18:04.808 "supported_io_types": { 00:18:04.808 "read": true, 00:18:04.808 "write": true, 00:18:04.808 "unmap": true, 00:18:04.808 "flush": true, 00:18:04.808 "reset": true, 00:18:04.808 "nvme_admin": false, 00:18:04.808 "nvme_io": false, 00:18:04.808 "nvme_io_md": false, 00:18:04.808 "write_zeroes": true, 00:18:04.809 "zcopy": true, 00:18:04.809 "get_zone_info": false, 00:18:04.809 "zone_management": false, 00:18:04.809 "zone_append": false, 00:18:04.809 "compare": false, 00:18:04.809 "compare_and_write": false, 00:18:04.809 "abort": true, 00:18:04.809 "seek_hole": false, 00:18:04.809 "seek_data": false, 00:18:04.809 "copy": true, 00:18:04.809 "nvme_iov_md": false 00:18:04.809 }, 00:18:04.809 "memory_domains": [ 00:18:04.809 { 00:18:04.809 "dma_device_id": "system", 00:18:04.809 "dma_device_type": 1 00:18:04.809 }, 00:18:04.809 { 00:18:04.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.809 "dma_device_type": 2 00:18:04.809 } 00:18:04.809 ], 00:18:04.809 "driver_specific": {} 00:18:04.809 } 00:18:04.809 ] 00:18:04.809 10:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:04.809 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:04.809 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:04.809 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:05.067 BaseBdev3 00:18:05.067 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:05.067 10:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:05.067 10:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:05.067 10:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:05.067 10:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:05.067 10:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:05.067 10:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:05.325 10:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:05.584 [ 00:18:05.584 { 00:18:05.584 "name": "BaseBdev3", 00:18:05.584 "aliases": [ 00:18:05.584 "d48fdfbb-b804-4990-a2b9-8e4e8651b38f" 00:18:05.584 ], 00:18:05.584 "product_name": "Malloc disk", 00:18:05.584 "block_size": 512, 00:18:05.584 "num_blocks": 65536, 00:18:05.584 "uuid": "d48fdfbb-b804-4990-a2b9-8e4e8651b38f", 00:18:05.584 "assigned_rate_limits": { 00:18:05.584 "rw_ios_per_sec": 0, 00:18:05.584 "rw_mbytes_per_sec": 0, 00:18:05.584 "r_mbytes_per_sec": 0, 00:18:05.584 "w_mbytes_per_sec": 0 00:18:05.584 }, 00:18:05.584 "claimed": false, 00:18:05.584 "zoned": false, 00:18:05.584 "supported_io_types": { 00:18:05.584 "read": true, 00:18:05.584 "write": true, 00:18:05.584 "unmap": true, 00:18:05.584 "flush": true, 00:18:05.584 
"reset": true, 00:18:05.584 "nvme_admin": false, 00:18:05.584 "nvme_io": false, 00:18:05.584 "nvme_io_md": false, 00:18:05.584 "write_zeroes": true, 00:18:05.584 "zcopy": true, 00:18:05.584 "get_zone_info": false, 00:18:05.584 "zone_management": false, 00:18:05.584 "zone_append": false, 00:18:05.584 "compare": false, 00:18:05.584 "compare_and_write": false, 00:18:05.584 "abort": true, 00:18:05.584 "seek_hole": false, 00:18:05.584 "seek_data": false, 00:18:05.584 "copy": true, 00:18:05.584 "nvme_iov_md": false 00:18:05.584 }, 00:18:05.584 "memory_domains": [ 00:18:05.584 { 00:18:05.584 "dma_device_id": "system", 00:18:05.584 "dma_device_type": 1 00:18:05.584 }, 00:18:05.584 { 00:18:05.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.584 "dma_device_type": 2 00:18:05.584 } 00:18:05.584 ], 00:18:05.584 "driver_specific": {} 00:18:05.584 } 00:18:05.584 ] 00:18:05.584 10:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:05.584 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:05.584 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:05.584 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:05.843 BaseBdev4 00:18:05.843 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:18:05.843 10:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:05.843 10:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:05.843 10:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:05.843 10:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:05.843 10:33:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:05.843 10:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.100 10:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:06.358 [ 00:18:06.358 { 00:18:06.358 "name": "BaseBdev4", 00:18:06.358 "aliases": [ 00:18:06.358 "f025beab-d704-493c-81a1-609bc0ed1b1e" 00:18:06.359 ], 00:18:06.359 "product_name": "Malloc disk", 00:18:06.359 "block_size": 512, 00:18:06.359 "num_blocks": 65536, 00:18:06.359 "uuid": "f025beab-d704-493c-81a1-609bc0ed1b1e", 00:18:06.359 "assigned_rate_limits": { 00:18:06.359 "rw_ios_per_sec": 0, 00:18:06.359 "rw_mbytes_per_sec": 0, 00:18:06.359 "r_mbytes_per_sec": 0, 00:18:06.359 "w_mbytes_per_sec": 0 00:18:06.359 }, 00:18:06.359 "claimed": false, 00:18:06.359 "zoned": false, 00:18:06.359 "supported_io_types": { 00:18:06.359 "read": true, 00:18:06.359 "write": true, 00:18:06.359 "unmap": true, 00:18:06.359 "flush": true, 00:18:06.359 "reset": true, 00:18:06.359 "nvme_admin": false, 00:18:06.359 "nvme_io": false, 00:18:06.359 "nvme_io_md": false, 00:18:06.359 "write_zeroes": true, 00:18:06.359 "zcopy": true, 00:18:06.359 "get_zone_info": false, 00:18:06.359 "zone_management": false, 00:18:06.359 "zone_append": false, 00:18:06.359 "compare": false, 00:18:06.359 "compare_and_write": false, 00:18:06.359 "abort": true, 00:18:06.359 "seek_hole": false, 00:18:06.359 "seek_data": false, 00:18:06.359 "copy": true, 00:18:06.359 "nvme_iov_md": false 00:18:06.359 }, 00:18:06.359 "memory_domains": [ 00:18:06.359 { 00:18:06.359 "dma_device_id": "system", 00:18:06.359 "dma_device_type": 1 00:18:06.359 }, 00:18:06.359 { 00:18:06.359 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:06.359 "dma_device_type": 2 00:18:06.359 } 00:18:06.359 ], 00:18:06.359 "driver_specific": {} 00:18:06.359 } 00:18:06.359 ] 00:18:06.359 10:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:06.359 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:06.359 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:06.359 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:06.617 [2024-07-25 10:33:10.236445] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:06.617 [2024-07-25 10:33:10.236511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:06.617 [2024-07-25 10:33:10.236543] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.617 [2024-07-25 10:33:10.237826] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:06.617 [2024-07-25 10:33:10.237869] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:06.617 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:06.617 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:06.617 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:06.617 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:06.617 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:06.617 
10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:06.617 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:06.617 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:06.617 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:06.617 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:06.617 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.617 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.875 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.875 "name": "Existed_Raid", 00:18:06.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.875 "strip_size_kb": 64, 00:18:06.875 "state": "configuring", 00:18:06.875 "raid_level": "concat", 00:18:06.875 "superblock": false, 00:18:06.875 "num_base_bdevs": 4, 00:18:06.875 "num_base_bdevs_discovered": 3, 00:18:06.875 "num_base_bdevs_operational": 4, 00:18:06.875 "base_bdevs_list": [ 00:18:06.875 { 00:18:06.875 "name": "BaseBdev1", 00:18:06.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.875 "is_configured": false, 00:18:06.875 "data_offset": 0, 00:18:06.875 "data_size": 0 00:18:06.875 }, 00:18:06.875 { 00:18:06.875 "name": "BaseBdev2", 00:18:06.875 "uuid": "86650403-7604-45b8-8e13-e5cb1e1f29dc", 00:18:06.875 "is_configured": true, 00:18:06.875 "data_offset": 0, 00:18:06.875 "data_size": 65536 00:18:06.875 }, 00:18:06.875 { 00:18:06.875 "name": "BaseBdev3", 00:18:06.875 "uuid": "d48fdfbb-b804-4990-a2b9-8e4e8651b38f", 00:18:06.875 "is_configured": true, 00:18:06.875 "data_offset": 
0, 00:18:06.875 "data_size": 65536 00:18:06.875 }, 00:18:06.875 { 00:18:06.875 "name": "BaseBdev4", 00:18:06.875 "uuid": "f025beab-d704-493c-81a1-609bc0ed1b1e", 00:18:06.875 "is_configured": true, 00:18:06.875 "data_offset": 0, 00:18:06.875 "data_size": 65536 00:18:06.875 } 00:18:06.875 ] 00:18:06.875 }' 00:18:06.875 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.875 10:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.441 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:07.699 [2024-07-25 10:33:11.275186] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:07.699 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:07.699 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:07.699 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:07.699 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:07.699 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:07.699 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:07.699 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.699 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.699 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.699 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:18:07.699 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.699 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.955 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.955 "name": "Existed_Raid", 00:18:07.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.955 "strip_size_kb": 64, 00:18:07.955 "state": "configuring", 00:18:07.955 "raid_level": "concat", 00:18:07.955 "superblock": false, 00:18:07.955 "num_base_bdevs": 4, 00:18:07.955 "num_base_bdevs_discovered": 2, 00:18:07.955 "num_base_bdevs_operational": 4, 00:18:07.955 "base_bdevs_list": [ 00:18:07.955 { 00:18:07.955 "name": "BaseBdev1", 00:18:07.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.955 "is_configured": false, 00:18:07.955 "data_offset": 0, 00:18:07.955 "data_size": 0 00:18:07.955 }, 00:18:07.955 { 00:18:07.955 "name": null, 00:18:07.955 "uuid": "86650403-7604-45b8-8e13-e5cb1e1f29dc", 00:18:07.955 "is_configured": false, 00:18:07.955 "data_offset": 0, 00:18:07.955 "data_size": 65536 00:18:07.955 }, 00:18:07.955 { 00:18:07.955 "name": "BaseBdev3", 00:18:07.955 "uuid": "d48fdfbb-b804-4990-a2b9-8e4e8651b38f", 00:18:07.955 "is_configured": true, 00:18:07.955 "data_offset": 0, 00:18:07.955 "data_size": 65536 00:18:07.955 }, 00:18:07.955 { 00:18:07.955 "name": "BaseBdev4", 00:18:07.955 "uuid": "f025beab-d704-493c-81a1-609bc0ed1b1e", 00:18:07.955 "is_configured": true, 00:18:07.955 "data_offset": 0, 00:18:07.955 "data_size": 65536 00:18:07.955 } 00:18:07.955 ] 00:18:07.955 }' 00:18:07.955 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.955 10:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.519 10:33:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.519 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:08.777 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:08.777 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:09.035 [2024-07-25 10:33:12.632960] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.035 BaseBdev1 00:18:09.035 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:09.035 10:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:09.035 10:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:09.035 10:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:09.035 10:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:09.035 10:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:09.035 10:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:09.294 10:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:09.552 [ 00:18:09.552 { 00:18:09.552 "name": "BaseBdev1", 00:18:09.552 "aliases": [ 00:18:09.552 
"f95960de-bcd8-4d2f-a38d-5cec486bfad4" 00:18:09.552 ], 00:18:09.552 "product_name": "Malloc disk", 00:18:09.552 "block_size": 512, 00:18:09.552 "num_blocks": 65536, 00:18:09.552 "uuid": "f95960de-bcd8-4d2f-a38d-5cec486bfad4", 00:18:09.552 "assigned_rate_limits": { 00:18:09.552 "rw_ios_per_sec": 0, 00:18:09.552 "rw_mbytes_per_sec": 0, 00:18:09.552 "r_mbytes_per_sec": 0, 00:18:09.552 "w_mbytes_per_sec": 0 00:18:09.552 }, 00:18:09.552 "claimed": true, 00:18:09.552 "claim_type": "exclusive_write", 00:18:09.552 "zoned": false, 00:18:09.552 "supported_io_types": { 00:18:09.552 "read": true, 00:18:09.552 "write": true, 00:18:09.552 "unmap": true, 00:18:09.552 "flush": true, 00:18:09.552 "reset": true, 00:18:09.552 "nvme_admin": false, 00:18:09.552 "nvme_io": false, 00:18:09.552 "nvme_io_md": false, 00:18:09.552 "write_zeroes": true, 00:18:09.552 "zcopy": true, 00:18:09.552 "get_zone_info": false, 00:18:09.552 "zone_management": false, 00:18:09.552 "zone_append": false, 00:18:09.552 "compare": false, 00:18:09.552 "compare_and_write": false, 00:18:09.552 "abort": true, 00:18:09.552 "seek_hole": false, 00:18:09.552 "seek_data": false, 00:18:09.552 "copy": true, 00:18:09.552 "nvme_iov_md": false 00:18:09.552 }, 00:18:09.552 "memory_domains": [ 00:18:09.552 { 00:18:09.552 "dma_device_id": "system", 00:18:09.552 "dma_device_type": 1 00:18:09.552 }, 00:18:09.552 { 00:18:09.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.552 "dma_device_type": 2 00:18:09.552 } 00:18:09.552 ], 00:18:09.552 "driver_specific": {} 00:18:09.552 } 00:18:09.552 ] 00:18:09.552 10:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:09.552 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:09.552 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:09.552 10:33:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:09.552 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:09.552 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:09.552 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:09.552 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.552 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.552 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.552 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.552 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.552 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.811 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.811 "name": "Existed_Raid", 00:18:09.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.811 "strip_size_kb": 64, 00:18:09.811 "state": "configuring", 00:18:09.811 "raid_level": "concat", 00:18:09.811 "superblock": false, 00:18:09.811 "num_base_bdevs": 4, 00:18:09.811 "num_base_bdevs_discovered": 3, 00:18:09.811 "num_base_bdevs_operational": 4, 00:18:09.811 "base_bdevs_list": [ 00:18:09.811 { 00:18:09.811 "name": "BaseBdev1", 00:18:09.811 "uuid": "f95960de-bcd8-4d2f-a38d-5cec486bfad4", 00:18:09.811 "is_configured": true, 00:18:09.811 "data_offset": 0, 00:18:09.811 "data_size": 65536 00:18:09.811 }, 00:18:09.811 { 00:18:09.811 "name": null, 00:18:09.811 "uuid": "86650403-7604-45b8-8e13-e5cb1e1f29dc", 
00:18:09.811 "is_configured": false, 00:18:09.811 "data_offset": 0, 00:18:09.811 "data_size": 65536 00:18:09.811 }, 00:18:09.811 { 00:18:09.811 "name": "BaseBdev3", 00:18:09.811 "uuid": "d48fdfbb-b804-4990-a2b9-8e4e8651b38f", 00:18:09.811 "is_configured": true, 00:18:09.811 "data_offset": 0, 00:18:09.811 "data_size": 65536 00:18:09.811 }, 00:18:09.811 { 00:18:09.811 "name": "BaseBdev4", 00:18:09.811 "uuid": "f025beab-d704-493c-81a1-609bc0ed1b1e", 00:18:09.811 "is_configured": true, 00:18:09.811 "data_offset": 0, 00:18:09.811 "data_size": 65536 00:18:09.811 } 00:18:09.811 ] 00:18:09.811 }' 00:18:09.811 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.811 10:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.376 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.376 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:10.634 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:10.634 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:10.891 [2024-07-25 10:33:14.562142] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:10.891 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:10.891 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:10.891 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:10.891 10:33:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:10.891 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:10.891 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:10.891 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.891 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:10.891 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.892 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.892 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.892 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.149 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.149 "name": "Existed_Raid", 00:18:11.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.149 "strip_size_kb": 64, 00:18:11.149 "state": "configuring", 00:18:11.149 "raid_level": "concat", 00:18:11.149 "superblock": false, 00:18:11.149 "num_base_bdevs": 4, 00:18:11.149 "num_base_bdevs_discovered": 2, 00:18:11.150 "num_base_bdevs_operational": 4, 00:18:11.150 "base_bdevs_list": [ 00:18:11.150 { 00:18:11.150 "name": "BaseBdev1", 00:18:11.150 "uuid": "f95960de-bcd8-4d2f-a38d-5cec486bfad4", 00:18:11.150 "is_configured": true, 00:18:11.150 "data_offset": 0, 00:18:11.150 "data_size": 65536 00:18:11.150 }, 00:18:11.150 { 00:18:11.150 "name": null, 00:18:11.150 "uuid": "86650403-7604-45b8-8e13-e5cb1e1f29dc", 00:18:11.150 "is_configured": false, 00:18:11.150 "data_offset": 0, 00:18:11.150 
"data_size": 65536 00:18:11.150 }, 00:18:11.150 { 00:18:11.150 "name": null, 00:18:11.150 "uuid": "d48fdfbb-b804-4990-a2b9-8e4e8651b38f", 00:18:11.150 "is_configured": false, 00:18:11.150 "data_offset": 0, 00:18:11.150 "data_size": 65536 00:18:11.150 }, 00:18:11.150 { 00:18:11.150 "name": "BaseBdev4", 00:18:11.150 "uuid": "f025beab-d704-493c-81a1-609bc0ed1b1e", 00:18:11.150 "is_configured": true, 00:18:11.150 "data_offset": 0, 00:18:11.150 "data_size": 65536 00:18:11.150 } 00:18:11.150 ] 00:18:11.150 }' 00:18:11.150 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.150 10:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.715 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.715 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:11.974 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:11.974 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:12.232 [2024-07-25 10:33:15.869666] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:12.232 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:12.232 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.232 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:12.232 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:18:12.232 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:12.232 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:12.232 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.232 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.232 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.232 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.232 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.232 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.490 10:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.490 "name": "Existed_Raid", 00:18:12.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.490 "strip_size_kb": 64, 00:18:12.490 "state": "configuring", 00:18:12.490 "raid_level": "concat", 00:18:12.490 "superblock": false, 00:18:12.490 "num_base_bdevs": 4, 00:18:12.490 "num_base_bdevs_discovered": 3, 00:18:12.490 "num_base_bdevs_operational": 4, 00:18:12.490 "base_bdevs_list": [ 00:18:12.490 { 00:18:12.490 "name": "BaseBdev1", 00:18:12.490 "uuid": "f95960de-bcd8-4d2f-a38d-5cec486bfad4", 00:18:12.490 "is_configured": true, 00:18:12.490 "data_offset": 0, 00:18:12.490 "data_size": 65536 00:18:12.490 }, 00:18:12.490 { 00:18:12.490 "name": null, 00:18:12.490 "uuid": "86650403-7604-45b8-8e13-e5cb1e1f29dc", 00:18:12.490 "is_configured": false, 00:18:12.490 "data_offset": 0, 00:18:12.490 "data_size": 65536 00:18:12.490 }, 00:18:12.490 { 00:18:12.490 "name": 
"BaseBdev3", 00:18:12.490 "uuid": "d48fdfbb-b804-4990-a2b9-8e4e8651b38f", 00:18:12.490 "is_configured": true, 00:18:12.490 "data_offset": 0, 00:18:12.490 "data_size": 65536 00:18:12.490 }, 00:18:12.490 { 00:18:12.490 "name": "BaseBdev4", 00:18:12.490 "uuid": "f025beab-d704-493c-81a1-609bc0ed1b1e", 00:18:12.490 "is_configured": true, 00:18:12.490 "data_offset": 0, 00:18:12.490 "data_size": 65536 00:18:12.490 } 00:18:12.490 ] 00:18:12.490 }' 00:18:12.490 10:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.490 10:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.092 10:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:13.093 10:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.351 10:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:13.351 10:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:13.609 [2024-07-25 10:33:17.149079] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:13.609 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:13.609 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:13.609 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:13.609 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:13.609 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:18:13.609 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:13.609 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:13.609 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:13.609 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:13.609 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:13.609 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.609 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.867 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.867 "name": "Existed_Raid", 00:18:13.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.867 "strip_size_kb": 64, 00:18:13.867 "state": "configuring", 00:18:13.867 "raid_level": "concat", 00:18:13.867 "superblock": false, 00:18:13.867 "num_base_bdevs": 4, 00:18:13.867 "num_base_bdevs_discovered": 2, 00:18:13.867 "num_base_bdevs_operational": 4, 00:18:13.867 "base_bdevs_list": [ 00:18:13.867 { 00:18:13.867 "name": null, 00:18:13.867 "uuid": "f95960de-bcd8-4d2f-a38d-5cec486bfad4", 00:18:13.867 "is_configured": false, 00:18:13.867 "data_offset": 0, 00:18:13.867 "data_size": 65536 00:18:13.867 }, 00:18:13.867 { 00:18:13.867 "name": null, 00:18:13.867 "uuid": "86650403-7604-45b8-8e13-e5cb1e1f29dc", 00:18:13.867 "is_configured": false, 00:18:13.867 "data_offset": 0, 00:18:13.867 "data_size": 65536 00:18:13.867 }, 00:18:13.867 { 00:18:13.867 "name": "BaseBdev3", 00:18:13.867 "uuid": "d48fdfbb-b804-4990-a2b9-8e4e8651b38f", 00:18:13.867 "is_configured": true, 
00:18:13.867 "data_offset": 0, 00:18:13.867 "data_size": 65536 00:18:13.867 }, 00:18:13.867 { 00:18:13.867 "name": "BaseBdev4", 00:18:13.867 "uuid": "f025beab-d704-493c-81a1-609bc0ed1b1e", 00:18:13.867 "is_configured": true, 00:18:13.867 "data_offset": 0, 00:18:13.867 "data_size": 65536 00:18:13.867 } 00:18:13.867 ] 00:18:13.867 }' 00:18:13.867 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.867 10:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.432 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.432 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:14.689 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:14.689 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:14.689 [2024-07-25 10:33:18.389752] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.946 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:14.946 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:14.946 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:14.946 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:14.946 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:14.946 10:33:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:14.946 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:14.947 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:14.947 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:14.947 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:14.947 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.947 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.947 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.947 "name": "Existed_Raid", 00:18:14.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.947 "strip_size_kb": 64, 00:18:14.947 "state": "configuring", 00:18:14.947 "raid_level": "concat", 00:18:14.947 "superblock": false, 00:18:14.947 "num_base_bdevs": 4, 00:18:14.947 "num_base_bdevs_discovered": 3, 00:18:14.947 "num_base_bdevs_operational": 4, 00:18:14.947 "base_bdevs_list": [ 00:18:14.947 { 00:18:14.947 "name": null, 00:18:14.947 "uuid": "f95960de-bcd8-4d2f-a38d-5cec486bfad4", 00:18:14.947 "is_configured": false, 00:18:14.947 "data_offset": 0, 00:18:14.947 "data_size": 65536 00:18:14.947 }, 00:18:14.947 { 00:18:14.947 "name": "BaseBdev2", 00:18:14.947 "uuid": "86650403-7604-45b8-8e13-e5cb1e1f29dc", 00:18:14.947 "is_configured": true, 00:18:14.947 "data_offset": 0, 00:18:14.947 "data_size": 65536 00:18:14.947 }, 00:18:14.947 { 00:18:14.947 "name": "BaseBdev3", 00:18:14.947 "uuid": "d48fdfbb-b804-4990-a2b9-8e4e8651b38f", 00:18:14.947 "is_configured": true, 00:18:14.947 "data_offset": 0, 00:18:14.947 "data_size": 65536 00:18:14.947 
}, 00:18:14.947 { 00:18:14.947 "name": "BaseBdev4", 00:18:14.947 "uuid": "f025beab-d704-493c-81a1-609bc0ed1b1e", 00:18:14.947 "is_configured": true, 00:18:14.947 "data_offset": 0, 00:18:14.947 "data_size": 65536 00:18:14.947 } 00:18:14.947 ] 00:18:14.947 }' 00:18:14.947 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.947 10:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.511 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.511 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:15.769 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:15.769 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.769 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:16.027 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f95960de-bcd8-4d2f-a38d-5cec486bfad4 00:18:16.286 [2024-07-25 10:33:19.920443] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:16.286 [2024-07-25 10:33:19.920502] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bf4ac0 00:18:16.286 [2024-07-25 10:33:19.920512] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:16.286 [2024-07-25 10:33:19.920736] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bf4f90 00:18:16.286 
[2024-07-25 10:33:19.920887] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bf4ac0 00:18:16.286 [2024-07-25 10:33:19.920903] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1bf4ac0 00:18:16.286 [2024-07-25 10:33:19.921148] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.286 NewBaseBdev 00:18:16.286 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:16.286 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:16.286 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:16.286 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:16.286 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:16.286 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:16.286 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:16.545 10:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:16.803 [ 00:18:16.803 { 00:18:16.803 "name": "NewBaseBdev", 00:18:16.803 "aliases": [ 00:18:16.803 "f95960de-bcd8-4d2f-a38d-5cec486bfad4" 00:18:16.803 ], 00:18:16.803 "product_name": "Malloc disk", 00:18:16.803 "block_size": 512, 00:18:16.803 "num_blocks": 65536, 00:18:16.803 "uuid": "f95960de-bcd8-4d2f-a38d-5cec486bfad4", 00:18:16.803 "assigned_rate_limits": { 00:18:16.803 "rw_ios_per_sec": 0, 00:18:16.803 "rw_mbytes_per_sec": 0, 00:18:16.803 "r_mbytes_per_sec": 0, 00:18:16.803 
"w_mbytes_per_sec": 0 00:18:16.803 }, 00:18:16.803 "claimed": true, 00:18:16.803 "claim_type": "exclusive_write", 00:18:16.803 "zoned": false, 00:18:16.803 "supported_io_types": { 00:18:16.803 "read": true, 00:18:16.803 "write": true, 00:18:16.803 "unmap": true, 00:18:16.803 "flush": true, 00:18:16.803 "reset": true, 00:18:16.803 "nvme_admin": false, 00:18:16.803 "nvme_io": false, 00:18:16.803 "nvme_io_md": false, 00:18:16.803 "write_zeroes": true, 00:18:16.803 "zcopy": true, 00:18:16.803 "get_zone_info": false, 00:18:16.803 "zone_management": false, 00:18:16.803 "zone_append": false, 00:18:16.803 "compare": false, 00:18:16.803 "compare_and_write": false, 00:18:16.803 "abort": true, 00:18:16.803 "seek_hole": false, 00:18:16.803 "seek_data": false, 00:18:16.803 "copy": true, 00:18:16.803 "nvme_iov_md": false 00:18:16.803 }, 00:18:16.803 "memory_domains": [ 00:18:16.803 { 00:18:16.803 "dma_device_id": "system", 00:18:16.803 "dma_device_type": 1 00:18:16.803 }, 00:18:16.803 { 00:18:16.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.803 "dma_device_type": 2 00:18:16.803 } 00:18:16.803 ], 00:18:16.803 "driver_specific": {} 00:18:16.803 } 00:18:16.803 ] 00:18:16.803 10:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:16.803 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:16.803 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:16.803 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:16.803 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:16.803 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:16.803 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:18:16.803 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.803 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.803 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.803 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.803 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.803 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.061 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.061 "name": "Existed_Raid", 00:18:17.061 "uuid": "ba2a54ec-389f-4f52-a889-7205a44b402a", 00:18:17.061 "strip_size_kb": 64, 00:18:17.061 "state": "online", 00:18:17.061 "raid_level": "concat", 00:18:17.061 "superblock": false, 00:18:17.061 "num_base_bdevs": 4, 00:18:17.061 "num_base_bdevs_discovered": 4, 00:18:17.061 "num_base_bdevs_operational": 4, 00:18:17.061 "base_bdevs_list": [ 00:18:17.061 { 00:18:17.061 "name": "NewBaseBdev", 00:18:17.061 "uuid": "f95960de-bcd8-4d2f-a38d-5cec486bfad4", 00:18:17.061 "is_configured": true, 00:18:17.061 "data_offset": 0, 00:18:17.061 "data_size": 65536 00:18:17.061 }, 00:18:17.061 { 00:18:17.061 "name": "BaseBdev2", 00:18:17.062 "uuid": "86650403-7604-45b8-8e13-e5cb1e1f29dc", 00:18:17.062 "is_configured": true, 00:18:17.062 "data_offset": 0, 00:18:17.062 "data_size": 65536 00:18:17.062 }, 00:18:17.062 { 00:18:17.062 "name": "BaseBdev3", 00:18:17.062 "uuid": "d48fdfbb-b804-4990-a2b9-8e4e8651b38f", 00:18:17.062 "is_configured": true, 00:18:17.062 "data_offset": 0, 00:18:17.062 "data_size": 65536 00:18:17.062 }, 00:18:17.062 { 00:18:17.062 "name": "BaseBdev4", 
00:18:17.062 "uuid": "f025beab-d704-493c-81a1-609bc0ed1b1e", 00:18:17.062 "is_configured": true, 00:18:17.062 "data_offset": 0, 00:18:17.062 "data_size": 65536 00:18:17.062 } 00:18:17.062 ] 00:18:17.062 }' 00:18:17.062 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.062 10:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.627 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:17.627 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:17.627 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:17.627 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:17.627 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:17.627 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:17.627 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:17.627 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:17.886 [2024-07-25 10:33:21.456868] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.886 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:17.886 "name": "Existed_Raid", 00:18:17.886 "aliases": [ 00:18:17.886 "ba2a54ec-389f-4f52-a889-7205a44b402a" 00:18:17.886 ], 00:18:17.886 "product_name": "Raid Volume", 00:18:17.886 "block_size": 512, 00:18:17.886 "num_blocks": 262144, 00:18:17.886 "uuid": "ba2a54ec-389f-4f52-a889-7205a44b402a", 00:18:17.886 "assigned_rate_limits": { 00:18:17.886 "rw_ios_per_sec": 0, 00:18:17.886 
"rw_mbytes_per_sec": 0, 00:18:17.886 "r_mbytes_per_sec": 0, 00:18:17.886 "w_mbytes_per_sec": 0 00:18:17.886 }, 00:18:17.886 "claimed": false, 00:18:17.886 "zoned": false, 00:18:17.886 "supported_io_types": { 00:18:17.886 "read": true, 00:18:17.886 "write": true, 00:18:17.886 "unmap": true, 00:18:17.886 "flush": true, 00:18:17.886 "reset": true, 00:18:17.886 "nvme_admin": false, 00:18:17.886 "nvme_io": false, 00:18:17.886 "nvme_io_md": false, 00:18:17.886 "write_zeroes": true, 00:18:17.886 "zcopy": false, 00:18:17.886 "get_zone_info": false, 00:18:17.886 "zone_management": false, 00:18:17.886 "zone_append": false, 00:18:17.886 "compare": false, 00:18:17.886 "compare_and_write": false, 00:18:17.886 "abort": false, 00:18:17.886 "seek_hole": false, 00:18:17.886 "seek_data": false, 00:18:17.886 "copy": false, 00:18:17.886 "nvme_iov_md": false 00:18:17.886 }, 00:18:17.886 "memory_domains": [ 00:18:17.886 { 00:18:17.886 "dma_device_id": "system", 00:18:17.886 "dma_device_type": 1 00:18:17.886 }, 00:18:17.886 { 00:18:17.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.886 "dma_device_type": 2 00:18:17.886 }, 00:18:17.886 { 00:18:17.886 "dma_device_id": "system", 00:18:17.886 "dma_device_type": 1 00:18:17.886 }, 00:18:17.886 { 00:18:17.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.886 "dma_device_type": 2 00:18:17.886 }, 00:18:17.886 { 00:18:17.886 "dma_device_id": "system", 00:18:17.886 "dma_device_type": 1 00:18:17.886 }, 00:18:17.886 { 00:18:17.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.886 "dma_device_type": 2 00:18:17.886 }, 00:18:17.886 { 00:18:17.886 "dma_device_id": "system", 00:18:17.886 "dma_device_type": 1 00:18:17.886 }, 00:18:17.886 { 00:18:17.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.886 "dma_device_type": 2 00:18:17.886 } 00:18:17.886 ], 00:18:17.886 "driver_specific": { 00:18:17.886 "raid": { 00:18:17.886 "uuid": "ba2a54ec-389f-4f52-a889-7205a44b402a", 00:18:17.886 "strip_size_kb": 64, 00:18:17.886 "state": "online", 
00:18:17.886 "raid_level": "concat", 00:18:17.886 "superblock": false, 00:18:17.886 "num_base_bdevs": 4, 00:18:17.886 "num_base_bdevs_discovered": 4, 00:18:17.886 "num_base_bdevs_operational": 4, 00:18:17.886 "base_bdevs_list": [ 00:18:17.886 { 00:18:17.886 "name": "NewBaseBdev", 00:18:17.886 "uuid": "f95960de-bcd8-4d2f-a38d-5cec486bfad4", 00:18:17.886 "is_configured": true, 00:18:17.886 "data_offset": 0, 00:18:17.886 "data_size": 65536 00:18:17.886 }, 00:18:17.886 { 00:18:17.886 "name": "BaseBdev2", 00:18:17.886 "uuid": "86650403-7604-45b8-8e13-e5cb1e1f29dc", 00:18:17.886 "is_configured": true, 00:18:17.886 "data_offset": 0, 00:18:17.886 "data_size": 65536 00:18:17.886 }, 00:18:17.886 { 00:18:17.886 "name": "BaseBdev3", 00:18:17.886 "uuid": "d48fdfbb-b804-4990-a2b9-8e4e8651b38f", 00:18:17.886 "is_configured": true, 00:18:17.886 "data_offset": 0, 00:18:17.886 "data_size": 65536 00:18:17.886 }, 00:18:17.886 { 00:18:17.886 "name": "BaseBdev4", 00:18:17.886 "uuid": "f025beab-d704-493c-81a1-609bc0ed1b1e", 00:18:17.886 "is_configured": true, 00:18:17.886 "data_offset": 0, 00:18:17.886 "data_size": 65536 00:18:17.886 } 00:18:17.886 ] 00:18:17.886 } 00:18:17.886 } 00:18:17.886 }' 00:18:17.886 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:17.886 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:17.886 BaseBdev2 00:18:17.886 BaseBdev3 00:18:17.886 BaseBdev4' 00:18:17.886 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:17.886 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:17.886 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:18.143 10:33:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:18.143 "name": "NewBaseBdev", 00:18:18.143 "aliases": [ 00:18:18.143 "f95960de-bcd8-4d2f-a38d-5cec486bfad4" 00:18:18.143 ], 00:18:18.143 "product_name": "Malloc disk", 00:18:18.143 "block_size": 512, 00:18:18.143 "num_blocks": 65536, 00:18:18.143 "uuid": "f95960de-bcd8-4d2f-a38d-5cec486bfad4", 00:18:18.143 "assigned_rate_limits": { 00:18:18.143 "rw_ios_per_sec": 0, 00:18:18.143 "rw_mbytes_per_sec": 0, 00:18:18.143 "r_mbytes_per_sec": 0, 00:18:18.143 "w_mbytes_per_sec": 0 00:18:18.143 }, 00:18:18.143 "claimed": true, 00:18:18.143 "claim_type": "exclusive_write", 00:18:18.143 "zoned": false, 00:18:18.143 "supported_io_types": { 00:18:18.143 "read": true, 00:18:18.143 "write": true, 00:18:18.144 "unmap": true, 00:18:18.144 "flush": true, 00:18:18.144 "reset": true, 00:18:18.144 "nvme_admin": false, 00:18:18.144 "nvme_io": false, 00:18:18.144 "nvme_io_md": false, 00:18:18.144 "write_zeroes": true, 00:18:18.144 "zcopy": true, 00:18:18.144 "get_zone_info": false, 00:18:18.144 "zone_management": false, 00:18:18.144 "zone_append": false, 00:18:18.144 "compare": false, 00:18:18.144 "compare_and_write": false, 00:18:18.144 "abort": true, 00:18:18.144 "seek_hole": false, 00:18:18.144 "seek_data": false, 00:18:18.144 "copy": true, 00:18:18.144 "nvme_iov_md": false 00:18:18.144 }, 00:18:18.144 "memory_domains": [ 00:18:18.144 { 00:18:18.144 "dma_device_id": "system", 00:18:18.144 "dma_device_type": 1 00:18:18.144 }, 00:18:18.144 { 00:18:18.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.144 "dma_device_type": 2 00:18:18.144 } 00:18:18.144 ], 00:18:18.144 "driver_specific": {} 00:18:18.144 }' 00:18:18.144 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.144 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.144 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:18:18.144 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.401 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.401 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:18.401 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.401 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.401 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:18.401 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:18.401 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:18.401 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:18.402 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:18.402 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:18.402 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:18.660 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:18.660 "name": "BaseBdev2", 00:18:18.660 "aliases": [ 00:18:18.660 "86650403-7604-45b8-8e13-e5cb1e1f29dc" 00:18:18.660 ], 00:18:18.660 "product_name": "Malloc disk", 00:18:18.660 "block_size": 512, 00:18:18.660 "num_blocks": 65536, 00:18:18.660 "uuid": "86650403-7604-45b8-8e13-e5cb1e1f29dc", 00:18:18.660 "assigned_rate_limits": { 00:18:18.660 "rw_ios_per_sec": 0, 00:18:18.660 "rw_mbytes_per_sec": 0, 00:18:18.660 "r_mbytes_per_sec": 0, 00:18:18.660 "w_mbytes_per_sec": 0 00:18:18.660 }, 00:18:18.660 "claimed": true, 00:18:18.660 
"claim_type": "exclusive_write", 00:18:18.660 "zoned": false, 00:18:18.660 "supported_io_types": { 00:18:18.660 "read": true, 00:18:18.660 "write": true, 00:18:18.660 "unmap": true, 00:18:18.660 "flush": true, 00:18:18.660 "reset": true, 00:18:18.660 "nvme_admin": false, 00:18:18.660 "nvme_io": false, 00:18:18.660 "nvme_io_md": false, 00:18:18.660 "write_zeroes": true, 00:18:18.660 "zcopy": true, 00:18:18.660 "get_zone_info": false, 00:18:18.660 "zone_management": false, 00:18:18.660 "zone_append": false, 00:18:18.660 "compare": false, 00:18:18.660 "compare_and_write": false, 00:18:18.660 "abort": true, 00:18:18.660 "seek_hole": false, 00:18:18.660 "seek_data": false, 00:18:18.660 "copy": true, 00:18:18.660 "nvme_iov_md": false 00:18:18.660 }, 00:18:18.660 "memory_domains": [ 00:18:18.660 { 00:18:18.660 "dma_device_id": "system", 00:18:18.660 "dma_device_type": 1 00:18:18.660 }, 00:18:18.660 { 00:18:18.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.660 "dma_device_type": 2 00:18:18.660 } 00:18:18.660 ], 00:18:18.660 "driver_specific": {} 00:18:18.660 }' 00:18:18.660 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.660 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:18.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:18.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:18:18.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:18.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:18.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:18.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:18.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:18.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:19.176 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:19.176 "name": "BaseBdev3", 00:18:19.176 "aliases": [ 00:18:19.176 "d48fdfbb-b804-4990-a2b9-8e4e8651b38f" 00:18:19.176 ], 00:18:19.176 "product_name": "Malloc disk", 00:18:19.176 "block_size": 512, 00:18:19.176 "num_blocks": 65536, 00:18:19.176 "uuid": "d48fdfbb-b804-4990-a2b9-8e4e8651b38f", 00:18:19.176 "assigned_rate_limits": { 00:18:19.176 "rw_ios_per_sec": 0, 00:18:19.176 "rw_mbytes_per_sec": 0, 00:18:19.176 "r_mbytes_per_sec": 0, 00:18:19.176 "w_mbytes_per_sec": 0 00:18:19.176 }, 00:18:19.176 "claimed": true, 00:18:19.176 "claim_type": "exclusive_write", 00:18:19.176 "zoned": false, 00:18:19.176 "supported_io_types": { 00:18:19.176 "read": true, 00:18:19.176 "write": true, 00:18:19.176 "unmap": true, 00:18:19.176 "flush": true, 00:18:19.176 "reset": true, 00:18:19.176 "nvme_admin": false, 00:18:19.176 "nvme_io": false, 00:18:19.176 "nvme_io_md": false, 00:18:19.176 "write_zeroes": true, 00:18:19.176 "zcopy": true, 00:18:19.176 "get_zone_info": false, 00:18:19.176 "zone_management": false, 00:18:19.176 "zone_append": false, 00:18:19.176 "compare": false, 00:18:19.176 "compare_and_write": false, 00:18:19.176 "abort": true, 00:18:19.176 
"seek_hole": false, 00:18:19.176 "seek_data": false, 00:18:19.176 "copy": true, 00:18:19.176 "nvme_iov_md": false 00:18:19.176 }, 00:18:19.176 "memory_domains": [ 00:18:19.176 { 00:18:19.176 "dma_device_id": "system", 00:18:19.176 "dma_device_type": 1 00:18:19.176 }, 00:18:19.176 { 00:18:19.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.176 "dma_device_type": 2 00:18:19.176 } 00:18:19.176 ], 00:18:19.176 "driver_specific": {} 00:18:19.176 }' 00:18:19.176 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.433 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.433 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:19.433 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.433 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.433 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:19.433 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.433 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.433 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:19.433 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.433 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.433 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:19.433 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:19.691 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev4 00:18:19.691 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:19.691 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:19.691 "name": "BaseBdev4", 00:18:19.691 "aliases": [ 00:18:19.691 "f025beab-d704-493c-81a1-609bc0ed1b1e" 00:18:19.691 ], 00:18:19.691 "product_name": "Malloc disk", 00:18:19.691 "block_size": 512, 00:18:19.691 "num_blocks": 65536, 00:18:19.691 "uuid": "f025beab-d704-493c-81a1-609bc0ed1b1e", 00:18:19.691 "assigned_rate_limits": { 00:18:19.691 "rw_ios_per_sec": 0, 00:18:19.691 "rw_mbytes_per_sec": 0, 00:18:19.691 "r_mbytes_per_sec": 0, 00:18:19.691 "w_mbytes_per_sec": 0 00:18:19.691 }, 00:18:19.691 "claimed": true, 00:18:19.691 "claim_type": "exclusive_write", 00:18:19.691 "zoned": false, 00:18:19.691 "supported_io_types": { 00:18:19.691 "read": true, 00:18:19.691 "write": true, 00:18:19.691 "unmap": true, 00:18:19.691 "flush": true, 00:18:19.691 "reset": true, 00:18:19.691 "nvme_admin": false, 00:18:19.691 "nvme_io": false, 00:18:19.691 "nvme_io_md": false, 00:18:19.691 "write_zeroes": true, 00:18:19.691 "zcopy": true, 00:18:19.691 "get_zone_info": false, 00:18:19.691 "zone_management": false, 00:18:19.691 "zone_append": false, 00:18:19.691 "compare": false, 00:18:19.691 "compare_and_write": false, 00:18:19.691 "abort": true, 00:18:19.691 "seek_hole": false, 00:18:19.691 "seek_data": false, 00:18:19.691 "copy": true, 00:18:19.691 "nvme_iov_md": false 00:18:19.691 }, 00:18:19.691 "memory_domains": [ 00:18:19.691 { 00:18:19.691 "dma_device_id": "system", 00:18:19.691 "dma_device_type": 1 00:18:19.691 }, 00:18:19.691 { 00:18:19.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.691 "dma_device_type": 2 00:18:19.691 } 00:18:19.691 ], 00:18:19.691 "driver_specific": {} 00:18:19.691 }' 00:18:19.691 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.948 10:33:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.948 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:19.948 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.948 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.948 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:19.948 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.948 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.948 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:19.948 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.948 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.207 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.207 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:20.207 [2024-07-25 10:33:23.907167] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:20.207 [2024-07-25 10:33:23.907201] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.207 [2024-07-25 10:33:23.907301] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.207 [2024-07-25 10:33:23.907395] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.207 [2024-07-25 10:33:23.907410] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bf4ac0 name Existed_Raid, state offline 00:18:20.464 10:33:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2401708 00:18:20.464 10:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2401708 ']' 00:18:20.464 10:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2401708 00:18:20.464 10:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:18:20.464 10:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:20.464 10:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2401708 00:18:20.464 10:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:20.464 10:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:20.464 10:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2401708' 00:18:20.464 killing process with pid 2401708 00:18:20.464 10:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2401708 00:18:20.464 [2024-07-25 10:33:23.953306] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.464 10:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2401708 00:18:20.464 [2024-07-25 10:33:24.003126] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:20.721 00:18:20.721 real 0m32.256s 00:18:20.721 user 1m0.056s 00:18:20.721 sys 0m4.498s 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.721 ************************************ 00:18:20.721 END TEST raid_state_function_test 
00:18:20.721 ************************************ 00:18:20.721 10:33:24 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:18:20.721 10:33:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:20.721 10:33:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:20.721 10:33:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.721 ************************************ 00:18:20.721 START TEST raid_state_function_test_sb 00:18:20.721 ************************************ 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:20.721 
10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2406831 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2406831' 00:18:20.721 Process raid pid: 2406831 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2406831 /var/tmp/spdk-raid.sock 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2406831 ']' 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:20.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.721 10:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.721 [2024-07-25 10:33:24.378564] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:18:20.721 [2024-07-25 10:33:24.378634] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.978 [2024-07-25 10:33:24.453153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.978 [2024-07-25 10:33:24.563391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.978 [2024-07-25 10:33:24.637955] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.978 [2024-07-25 10:33:24.637994] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:21.908 10:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.908 10:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:21.908 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:22.164 [2024-07-25 10:33:25.626184] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:22.164 [2024-07-25 10:33:25.626224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:22.164 [2024-07-25 10:33:25.626251] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:22.164 [2024-07-25 10:33:25.626263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:22.164 [2024-07-25 10:33:25.626271] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:22.164 [2024-07-25 10:33:25.626287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist 
now 00:18:22.164 [2024-07-25 10:33:25.626296] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:22.164 [2024-07-25 10:33:25.626306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:22.164 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:22.164 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:22.164 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:22.164 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:22.164 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:22.164 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:22.164 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.164 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.165 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.165 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.165 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.165 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.422 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.422 "name": "Existed_Raid", 00:18:22.422 "uuid": 
"5163b3ea-35c6-437e-bf6a-1c1b273d51da", 00:18:22.422 "strip_size_kb": 64, 00:18:22.422 "state": "configuring", 00:18:22.422 "raid_level": "concat", 00:18:22.422 "superblock": true, 00:18:22.422 "num_base_bdevs": 4, 00:18:22.422 "num_base_bdevs_discovered": 0, 00:18:22.422 "num_base_bdevs_operational": 4, 00:18:22.422 "base_bdevs_list": [ 00:18:22.422 { 00:18:22.422 "name": "BaseBdev1", 00:18:22.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.422 "is_configured": false, 00:18:22.422 "data_offset": 0, 00:18:22.422 "data_size": 0 00:18:22.422 }, 00:18:22.422 { 00:18:22.422 "name": "BaseBdev2", 00:18:22.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.422 "is_configured": false, 00:18:22.422 "data_offset": 0, 00:18:22.422 "data_size": 0 00:18:22.422 }, 00:18:22.422 { 00:18:22.422 "name": "BaseBdev3", 00:18:22.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.422 "is_configured": false, 00:18:22.422 "data_offset": 0, 00:18:22.422 "data_size": 0 00:18:22.422 }, 00:18:22.422 { 00:18:22.422 "name": "BaseBdev4", 00:18:22.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.422 "is_configured": false, 00:18:22.422 "data_offset": 0, 00:18:22.422 "data_size": 0 00:18:22.422 } 00:18:22.422 ] 00:18:22.422 }' 00:18:22.422 10:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.422 10:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.985 10:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:22.985 [2024-07-25 10:33:26.656771] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:22.985 [2024-07-25 10:33:26.656808] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbd2640 name Existed_Raid, state configuring 00:18:22.985 10:33:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:23.242 [2024-07-25 10:33:26.905487] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:23.242 [2024-07-25 10:33:26.905536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:23.242 [2024-07-25 10:33:26.905547] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:23.242 [2024-07-25 10:33:26.905561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:23.242 [2024-07-25 10:33:26.905577] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:23.242 [2024-07-25 10:33:26.905594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:23.242 [2024-07-25 10:33:26.905603] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:23.242 [2024-07-25 10:33:26.905616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:23.242 10:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:23.500 [2024-07-25 10:33:27.170526] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.500 BaseBdev1 00:18:23.500 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:23.500 10:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:23.500 10:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 
-- # local bdev_timeout= 00:18:23.500 10:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:23.500 10:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:23.500 10:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:23.500 10:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:23.758 10:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:24.015 [ 00:18:24.015 { 00:18:24.015 "name": "BaseBdev1", 00:18:24.015 "aliases": [ 00:18:24.015 "1b0786ab-d3e2-4438-b0b2-0d3a15aa51c3" 00:18:24.015 ], 00:18:24.015 "product_name": "Malloc disk", 00:18:24.015 "block_size": 512, 00:18:24.015 "num_blocks": 65536, 00:18:24.015 "uuid": "1b0786ab-d3e2-4438-b0b2-0d3a15aa51c3", 00:18:24.015 "assigned_rate_limits": { 00:18:24.015 "rw_ios_per_sec": 0, 00:18:24.015 "rw_mbytes_per_sec": 0, 00:18:24.015 "r_mbytes_per_sec": 0, 00:18:24.015 "w_mbytes_per_sec": 0 00:18:24.015 }, 00:18:24.015 "claimed": true, 00:18:24.015 "claim_type": "exclusive_write", 00:18:24.015 "zoned": false, 00:18:24.015 "supported_io_types": { 00:18:24.015 "read": true, 00:18:24.015 "write": true, 00:18:24.015 "unmap": true, 00:18:24.015 "flush": true, 00:18:24.015 "reset": true, 00:18:24.015 "nvme_admin": false, 00:18:24.015 "nvme_io": false, 00:18:24.015 "nvme_io_md": false, 00:18:24.015 "write_zeroes": true, 00:18:24.015 "zcopy": true, 00:18:24.015 "get_zone_info": false, 00:18:24.015 "zone_management": false, 00:18:24.015 "zone_append": false, 00:18:24.015 "compare": false, 00:18:24.015 "compare_and_write": false, 00:18:24.015 "abort": true, 00:18:24.015 "seek_hole": 
false, 00:18:24.015 "seek_data": false, 00:18:24.015 "copy": true, 00:18:24.015 "nvme_iov_md": false 00:18:24.015 }, 00:18:24.015 "memory_domains": [ 00:18:24.015 { 00:18:24.015 "dma_device_id": "system", 00:18:24.015 "dma_device_type": 1 00:18:24.015 }, 00:18:24.015 { 00:18:24.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.015 "dma_device_type": 2 00:18:24.015 } 00:18:24.015 ], 00:18:24.015 "driver_specific": {} 00:18:24.015 } 00:18:24.015 ] 00:18:24.015 10:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:24.015 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:24.015 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:24.015 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:24.015 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:24.015 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:24.015 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:24.015 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.015 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.015 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.015 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.015 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.015 10:33:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.273 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.273 "name": "Existed_Raid", 00:18:24.273 "uuid": "7c3ce2b3-ea4a-4533-a715-4925f39bd906", 00:18:24.273 "strip_size_kb": 64, 00:18:24.273 "state": "configuring", 00:18:24.273 "raid_level": "concat", 00:18:24.273 "superblock": true, 00:18:24.273 "num_base_bdevs": 4, 00:18:24.273 "num_base_bdevs_discovered": 1, 00:18:24.273 "num_base_bdevs_operational": 4, 00:18:24.273 "base_bdevs_list": [ 00:18:24.273 { 00:18:24.273 "name": "BaseBdev1", 00:18:24.273 "uuid": "1b0786ab-d3e2-4438-b0b2-0d3a15aa51c3", 00:18:24.273 "is_configured": true, 00:18:24.273 "data_offset": 2048, 00:18:24.273 "data_size": 63488 00:18:24.273 }, 00:18:24.273 { 00:18:24.273 "name": "BaseBdev2", 00:18:24.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.273 "is_configured": false, 00:18:24.273 "data_offset": 0, 00:18:24.273 "data_size": 0 00:18:24.273 }, 00:18:24.273 { 00:18:24.273 "name": "BaseBdev3", 00:18:24.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.273 "is_configured": false, 00:18:24.273 "data_offset": 0, 00:18:24.273 "data_size": 0 00:18:24.273 }, 00:18:24.273 { 00:18:24.273 "name": "BaseBdev4", 00:18:24.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.273 "is_configured": false, 00:18:24.273 "data_offset": 0, 00:18:24.273 "data_size": 0 00:18:24.273 } 00:18:24.273 ] 00:18:24.273 }' 00:18:24.273 10:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.273 10:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.837 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:25.095 [2024-07-25 
10:33:28.698563] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:25.095 [2024-07-25 10:33:28.698618] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbd1e50 name Existed_Raid, state configuring 00:18:25.095 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:25.353 [2024-07-25 10:33:28.943277] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.353 [2024-07-25 10:33:28.944812] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.353 [2024-07-25 10:33:28.944848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.353 [2024-07-25 10:33:28.944860] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:25.353 [2024-07-25 10:33:28.944874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:25.353 [2024-07-25 10:33:28.944884] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:25.353 [2024-07-25 10:33:28.944896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.353 10:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.612 10:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.612 "name": "Existed_Raid", 00:18:25.612 "uuid": "d96b919e-4c83-4543-9787-1b546f1b03a5", 00:18:25.612 "strip_size_kb": 64, 00:18:25.612 "state": "configuring", 00:18:25.612 "raid_level": "concat", 00:18:25.612 "superblock": true, 00:18:25.612 "num_base_bdevs": 4, 00:18:25.612 "num_base_bdevs_discovered": 1, 00:18:25.612 "num_base_bdevs_operational": 4, 00:18:25.612 "base_bdevs_list": [ 00:18:25.612 { 00:18:25.612 "name": "BaseBdev1", 00:18:25.612 "uuid": "1b0786ab-d3e2-4438-b0b2-0d3a15aa51c3", 00:18:25.612 "is_configured": true, 00:18:25.612 "data_offset": 2048, 00:18:25.612 "data_size": 63488 00:18:25.612 }, 00:18:25.612 { 00:18:25.612 "name": "BaseBdev2", 00:18:25.612 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:25.612 "is_configured": false, 00:18:25.612 "data_offset": 0, 00:18:25.612 "data_size": 0 00:18:25.612 }, 00:18:25.612 { 00:18:25.612 "name": "BaseBdev3", 00:18:25.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.612 "is_configured": false, 00:18:25.612 "data_offset": 0, 00:18:25.612 "data_size": 0 00:18:25.612 }, 00:18:25.612 { 00:18:25.612 "name": "BaseBdev4", 00:18:25.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.612 "is_configured": false, 00:18:25.612 "data_offset": 0, 00:18:25.612 "data_size": 0 00:18:25.612 } 00:18:25.612 ] 00:18:25.612 }' 00:18:25.612 10:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.612 10:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.176 10:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:26.434 [2024-07-25 10:33:30.016656] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.434 BaseBdev2 00:18:26.434 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:26.434 10:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:26.434 10:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:26.434 10:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:26.434 10:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:26.434 10:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:26.434 10:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:26.692 10:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:26.950 [ 00:18:26.950 { 00:18:26.950 "name": "BaseBdev2", 00:18:26.950 "aliases": [ 00:18:26.950 "593f3e39-7c7a-49bd-a144-cb193897ec0e" 00:18:26.950 ], 00:18:26.950 "product_name": "Malloc disk", 00:18:26.950 "block_size": 512, 00:18:26.950 "num_blocks": 65536, 00:18:26.950 "uuid": "593f3e39-7c7a-49bd-a144-cb193897ec0e", 00:18:26.950 "assigned_rate_limits": { 00:18:26.950 "rw_ios_per_sec": 0, 00:18:26.950 "rw_mbytes_per_sec": 0, 00:18:26.950 "r_mbytes_per_sec": 0, 00:18:26.950 "w_mbytes_per_sec": 0 00:18:26.950 }, 00:18:26.950 "claimed": true, 00:18:26.950 "claim_type": "exclusive_write", 00:18:26.950 "zoned": false, 00:18:26.950 "supported_io_types": { 00:18:26.950 "read": true, 00:18:26.950 "write": true, 00:18:26.950 "unmap": true, 00:18:26.950 "flush": true, 00:18:26.950 "reset": true, 00:18:26.950 "nvme_admin": false, 00:18:26.950 "nvme_io": false, 00:18:26.950 "nvme_io_md": false, 00:18:26.950 "write_zeroes": true, 00:18:26.950 "zcopy": true, 00:18:26.950 "get_zone_info": false, 00:18:26.950 "zone_management": false, 00:18:26.950 "zone_append": false, 00:18:26.950 "compare": false, 00:18:26.950 "compare_and_write": false, 00:18:26.950 "abort": true, 00:18:26.950 "seek_hole": false, 00:18:26.950 "seek_data": false, 00:18:26.950 "copy": true, 00:18:26.950 "nvme_iov_md": false 00:18:26.950 }, 00:18:26.950 "memory_domains": [ 00:18:26.950 { 00:18:26.951 "dma_device_id": "system", 00:18:26.951 "dma_device_type": 1 00:18:26.951 }, 00:18:26.951 { 00:18:26.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.951 "dma_device_type": 2 00:18:26.951 } 00:18:26.951 ], 00:18:26.951 "driver_specific": {} 00:18:26.951 } 00:18:26.951 ] 
00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.951 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.209 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:27.209 "name": "Existed_Raid", 
00:18:27.209 "uuid": "d96b919e-4c83-4543-9787-1b546f1b03a5", 00:18:27.209 "strip_size_kb": 64, 00:18:27.209 "state": "configuring", 00:18:27.209 "raid_level": "concat", 00:18:27.209 "superblock": true, 00:18:27.209 "num_base_bdevs": 4, 00:18:27.209 "num_base_bdevs_discovered": 2, 00:18:27.209 "num_base_bdevs_operational": 4, 00:18:27.209 "base_bdevs_list": [ 00:18:27.209 { 00:18:27.209 "name": "BaseBdev1", 00:18:27.209 "uuid": "1b0786ab-d3e2-4438-b0b2-0d3a15aa51c3", 00:18:27.209 "is_configured": true, 00:18:27.209 "data_offset": 2048, 00:18:27.209 "data_size": 63488 00:18:27.209 }, 00:18:27.209 { 00:18:27.209 "name": "BaseBdev2", 00:18:27.209 "uuid": "593f3e39-7c7a-49bd-a144-cb193897ec0e", 00:18:27.209 "is_configured": true, 00:18:27.209 "data_offset": 2048, 00:18:27.209 "data_size": 63488 00:18:27.209 }, 00:18:27.209 { 00:18:27.209 "name": "BaseBdev3", 00:18:27.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.209 "is_configured": false, 00:18:27.209 "data_offset": 0, 00:18:27.209 "data_size": 0 00:18:27.209 }, 00:18:27.209 { 00:18:27.209 "name": "BaseBdev4", 00:18:27.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.209 "is_configured": false, 00:18:27.209 "data_offset": 0, 00:18:27.209 "data_size": 0 00:18:27.209 } 00:18:27.209 ] 00:18:27.209 }' 00:18:27.209 10:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.209 10:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.773 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:28.031 [2024-07-25 10:33:31.626672] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:28.031 BaseBdev3 00:18:28.031 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:28.031 
10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:28.031 10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:28.031 10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:28.031 10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:28.031 10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:28.031 10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.289 10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:28.547 [ 00:18:28.547 { 00:18:28.547 "name": "BaseBdev3", 00:18:28.547 "aliases": [ 00:18:28.547 "658d6424-0b35-4b0a-82b3-7f767abcca04" 00:18:28.547 ], 00:18:28.547 "product_name": "Malloc disk", 00:18:28.547 "block_size": 512, 00:18:28.547 "num_blocks": 65536, 00:18:28.547 "uuid": "658d6424-0b35-4b0a-82b3-7f767abcca04", 00:18:28.547 "assigned_rate_limits": { 00:18:28.547 "rw_ios_per_sec": 0, 00:18:28.547 "rw_mbytes_per_sec": 0, 00:18:28.547 "r_mbytes_per_sec": 0, 00:18:28.547 "w_mbytes_per_sec": 0 00:18:28.547 }, 00:18:28.547 "claimed": true, 00:18:28.547 "claim_type": "exclusive_write", 00:18:28.547 "zoned": false, 00:18:28.547 "supported_io_types": { 00:18:28.547 "read": true, 00:18:28.547 "write": true, 00:18:28.547 "unmap": true, 00:18:28.547 "flush": true, 00:18:28.547 "reset": true, 00:18:28.547 "nvme_admin": false, 00:18:28.547 "nvme_io": false, 00:18:28.547 "nvme_io_md": false, 00:18:28.547 "write_zeroes": true, 00:18:28.547 "zcopy": true, 00:18:28.547 "get_zone_info": 
false, 00:18:28.547 "zone_management": false, 00:18:28.547 "zone_append": false, 00:18:28.547 "compare": false, 00:18:28.547 "compare_and_write": false, 00:18:28.547 "abort": true, 00:18:28.547 "seek_hole": false, 00:18:28.547 "seek_data": false, 00:18:28.547 "copy": true, 00:18:28.547 "nvme_iov_md": false 00:18:28.547 }, 00:18:28.547 "memory_domains": [ 00:18:28.547 { 00:18:28.547 "dma_device_id": "system", 00:18:28.547 "dma_device_type": 1 00:18:28.547 }, 00:18:28.547 { 00:18:28.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.547 "dma_device_type": 2 00:18:28.547 } 00:18:28.547 ], 00:18:28.547 "driver_specific": {} 00:18:28.547 } 00:18:28.547 ] 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:28.547 10:33:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.547 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.806 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:28.806 "name": "Existed_Raid", 00:18:28.806 "uuid": "d96b919e-4c83-4543-9787-1b546f1b03a5", 00:18:28.806 "strip_size_kb": 64, 00:18:28.806 "state": "configuring", 00:18:28.806 "raid_level": "concat", 00:18:28.806 "superblock": true, 00:18:28.806 "num_base_bdevs": 4, 00:18:28.806 "num_base_bdevs_discovered": 3, 00:18:28.806 "num_base_bdevs_operational": 4, 00:18:28.806 "base_bdevs_list": [ 00:18:28.806 { 00:18:28.806 "name": "BaseBdev1", 00:18:28.806 "uuid": "1b0786ab-d3e2-4438-b0b2-0d3a15aa51c3", 00:18:28.806 "is_configured": true, 00:18:28.806 "data_offset": 2048, 00:18:28.806 "data_size": 63488 00:18:28.806 }, 00:18:28.806 { 00:18:28.806 "name": "BaseBdev2", 00:18:28.806 "uuid": "593f3e39-7c7a-49bd-a144-cb193897ec0e", 00:18:28.806 "is_configured": true, 00:18:28.806 "data_offset": 2048, 00:18:28.806 "data_size": 63488 00:18:28.806 }, 00:18:28.806 { 00:18:28.806 "name": "BaseBdev3", 00:18:28.806 "uuid": "658d6424-0b35-4b0a-82b3-7f767abcca04", 00:18:28.806 "is_configured": true, 00:18:28.806 "data_offset": 2048, 00:18:28.806 "data_size": 63488 00:18:28.806 }, 00:18:28.806 { 00:18:28.806 "name": "BaseBdev4", 00:18:28.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.806 "is_configured": false, 00:18:28.806 "data_offset": 0, 00:18:28.806 "data_size": 0 00:18:28.806 } 00:18:28.806 ] 00:18:28.806 }' 00:18:28.806 
10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:28.806 10:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.404 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:29.670 [2024-07-25 10:33:33.256948] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:29.670 [2024-07-25 10:33:33.257208] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xbd2cb0 00:18:29.670 [2024-07-25 10:33:33.257227] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:29.670 [2024-07-25 10:33:33.257405] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbe9c70 00:18:29.670 [2024-07-25 10:33:33.257561] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbd2cb0 00:18:29.670 [2024-07-25 10:33:33.257578] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xbd2cb0 00:18:29.670 [2024-07-25 10:33:33.257685] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.670 BaseBdev4 00:18:29.670 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:18:29.670 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:29.670 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:29.670 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:29.670 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:29.670 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:18:29.670 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:29.927 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:30.185 [ 00:18:30.185 { 00:18:30.185 "name": "BaseBdev4", 00:18:30.185 "aliases": [ 00:18:30.185 "3fbb8685-bb88-4fbd-a137-f0afc71fcda6" 00:18:30.185 ], 00:18:30.185 "product_name": "Malloc disk", 00:18:30.185 "block_size": 512, 00:18:30.185 "num_blocks": 65536, 00:18:30.185 "uuid": "3fbb8685-bb88-4fbd-a137-f0afc71fcda6", 00:18:30.185 "assigned_rate_limits": { 00:18:30.185 "rw_ios_per_sec": 0, 00:18:30.185 "rw_mbytes_per_sec": 0, 00:18:30.185 "r_mbytes_per_sec": 0, 00:18:30.185 "w_mbytes_per_sec": 0 00:18:30.185 }, 00:18:30.185 "claimed": true, 00:18:30.185 "claim_type": "exclusive_write", 00:18:30.185 "zoned": false, 00:18:30.185 "supported_io_types": { 00:18:30.185 "read": true, 00:18:30.185 "write": true, 00:18:30.185 "unmap": true, 00:18:30.185 "flush": true, 00:18:30.185 "reset": true, 00:18:30.185 "nvme_admin": false, 00:18:30.185 "nvme_io": false, 00:18:30.185 "nvme_io_md": false, 00:18:30.185 "write_zeroes": true, 00:18:30.185 "zcopy": true, 00:18:30.185 "get_zone_info": false, 00:18:30.185 "zone_management": false, 00:18:30.185 "zone_append": false, 00:18:30.185 "compare": false, 00:18:30.185 "compare_and_write": false, 00:18:30.185 "abort": true, 00:18:30.185 "seek_hole": false, 00:18:30.185 "seek_data": false, 00:18:30.185 "copy": true, 00:18:30.185 "nvme_iov_md": false 00:18:30.185 }, 00:18:30.186 "memory_domains": [ 00:18:30.186 { 00:18:30.186 "dma_device_id": "system", 00:18:30.186 "dma_device_type": 1 00:18:30.186 }, 00:18:30.186 { 00:18:30.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.186 
"dma_device_type": 2 00:18:30.186 } 00:18:30.186 ], 00:18:30.186 "driver_specific": {} 00:18:30.186 } 00:18:30.186 ] 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.186 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.444 10:33:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:30.444 "name": "Existed_Raid", 00:18:30.444 "uuid": "d96b919e-4c83-4543-9787-1b546f1b03a5", 00:18:30.444 "strip_size_kb": 64, 00:18:30.444 "state": "online", 00:18:30.444 "raid_level": "concat", 00:18:30.444 "superblock": true, 00:18:30.444 "num_base_bdevs": 4, 00:18:30.444 "num_base_bdevs_discovered": 4, 00:18:30.444 "num_base_bdevs_operational": 4, 00:18:30.444 "base_bdevs_list": [ 00:18:30.444 { 00:18:30.444 "name": "BaseBdev1", 00:18:30.444 "uuid": "1b0786ab-d3e2-4438-b0b2-0d3a15aa51c3", 00:18:30.444 "is_configured": true, 00:18:30.444 "data_offset": 2048, 00:18:30.444 "data_size": 63488 00:18:30.444 }, 00:18:30.444 { 00:18:30.444 "name": "BaseBdev2", 00:18:30.444 "uuid": "593f3e39-7c7a-49bd-a144-cb193897ec0e", 00:18:30.444 "is_configured": true, 00:18:30.444 "data_offset": 2048, 00:18:30.444 "data_size": 63488 00:18:30.444 }, 00:18:30.444 { 00:18:30.444 "name": "BaseBdev3", 00:18:30.444 "uuid": "658d6424-0b35-4b0a-82b3-7f767abcca04", 00:18:30.444 "is_configured": true, 00:18:30.444 "data_offset": 2048, 00:18:30.444 "data_size": 63488 00:18:30.444 }, 00:18:30.444 { 00:18:30.444 "name": "BaseBdev4", 00:18:30.444 "uuid": "3fbb8685-bb88-4fbd-a137-f0afc71fcda6", 00:18:30.444 "is_configured": true, 00:18:30.444 "data_offset": 2048, 00:18:30.444 "data_size": 63488 00:18:30.444 } 00:18:30.444 ] 00:18:30.444 }' 00:18:30.444 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:30.444 10:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.011 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:31.011 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:31.011 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
00:18:31.011 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:31.011 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:31.011 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:31.011 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:31.011 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:31.269 [2024-07-25 10:33:34.797401] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.269 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:31.269 "name": "Existed_Raid", 00:18:31.269 "aliases": [ 00:18:31.269 "d96b919e-4c83-4543-9787-1b546f1b03a5" 00:18:31.269 ], 00:18:31.269 "product_name": "Raid Volume", 00:18:31.269 "block_size": 512, 00:18:31.269 "num_blocks": 253952, 00:18:31.269 "uuid": "d96b919e-4c83-4543-9787-1b546f1b03a5", 00:18:31.269 "assigned_rate_limits": { 00:18:31.269 "rw_ios_per_sec": 0, 00:18:31.269 "rw_mbytes_per_sec": 0, 00:18:31.269 "r_mbytes_per_sec": 0, 00:18:31.269 "w_mbytes_per_sec": 0 00:18:31.269 }, 00:18:31.269 "claimed": false, 00:18:31.269 "zoned": false, 00:18:31.269 "supported_io_types": { 00:18:31.269 "read": true, 00:18:31.269 "write": true, 00:18:31.269 "unmap": true, 00:18:31.269 "flush": true, 00:18:31.269 "reset": true, 00:18:31.269 "nvme_admin": false, 00:18:31.269 "nvme_io": false, 00:18:31.269 "nvme_io_md": false, 00:18:31.269 "write_zeroes": true, 00:18:31.269 "zcopy": false, 00:18:31.269 "get_zone_info": false, 00:18:31.269 "zone_management": false, 00:18:31.269 "zone_append": false, 00:18:31.269 "compare": false, 00:18:31.269 "compare_and_write": false, 00:18:31.269 "abort": false, 00:18:31.269 "seek_hole": 
false, 00:18:31.269 "seek_data": false, 00:18:31.269 "copy": false, 00:18:31.269 "nvme_iov_md": false 00:18:31.269 }, 00:18:31.269 "memory_domains": [ 00:18:31.269 { 00:18:31.269 "dma_device_id": "system", 00:18:31.269 "dma_device_type": 1 00:18:31.269 }, 00:18:31.269 { 00:18:31.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.269 "dma_device_type": 2 00:18:31.269 }, 00:18:31.269 { 00:18:31.269 "dma_device_id": "system", 00:18:31.269 "dma_device_type": 1 00:18:31.269 }, 00:18:31.269 { 00:18:31.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.269 "dma_device_type": 2 00:18:31.269 }, 00:18:31.269 { 00:18:31.269 "dma_device_id": "system", 00:18:31.269 "dma_device_type": 1 00:18:31.269 }, 00:18:31.269 { 00:18:31.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.269 "dma_device_type": 2 00:18:31.269 }, 00:18:31.269 { 00:18:31.269 "dma_device_id": "system", 00:18:31.269 "dma_device_type": 1 00:18:31.269 }, 00:18:31.269 { 00:18:31.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.269 "dma_device_type": 2 00:18:31.269 } 00:18:31.269 ], 00:18:31.269 "driver_specific": { 00:18:31.269 "raid": { 00:18:31.269 "uuid": "d96b919e-4c83-4543-9787-1b546f1b03a5", 00:18:31.269 "strip_size_kb": 64, 00:18:31.269 "state": "online", 00:18:31.269 "raid_level": "concat", 00:18:31.269 "superblock": true, 00:18:31.269 "num_base_bdevs": 4, 00:18:31.269 "num_base_bdevs_discovered": 4, 00:18:31.269 "num_base_bdevs_operational": 4, 00:18:31.269 "base_bdevs_list": [ 00:18:31.269 { 00:18:31.269 "name": "BaseBdev1", 00:18:31.269 "uuid": "1b0786ab-d3e2-4438-b0b2-0d3a15aa51c3", 00:18:31.269 "is_configured": true, 00:18:31.269 "data_offset": 2048, 00:18:31.269 "data_size": 63488 00:18:31.269 }, 00:18:31.269 { 00:18:31.269 "name": "BaseBdev2", 00:18:31.269 "uuid": "593f3e39-7c7a-49bd-a144-cb193897ec0e", 00:18:31.269 "is_configured": true, 00:18:31.269 "data_offset": 2048, 00:18:31.269 "data_size": 63488 00:18:31.269 }, 00:18:31.269 { 00:18:31.269 "name": "BaseBdev3", 00:18:31.269 
"uuid": "658d6424-0b35-4b0a-82b3-7f767abcca04", 00:18:31.269 "is_configured": true, 00:18:31.269 "data_offset": 2048, 00:18:31.269 "data_size": 63488 00:18:31.269 }, 00:18:31.269 { 00:18:31.269 "name": "BaseBdev4", 00:18:31.269 "uuid": "3fbb8685-bb88-4fbd-a137-f0afc71fcda6", 00:18:31.269 "is_configured": true, 00:18:31.269 "data_offset": 2048, 00:18:31.269 "data_size": 63488 00:18:31.269 } 00:18:31.269 ] 00:18:31.269 } 00:18:31.269 } 00:18:31.269 }' 00:18:31.269 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:31.269 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:31.269 BaseBdev2 00:18:31.269 BaseBdev3 00:18:31.269 BaseBdev4' 00:18:31.269 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:31.269 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:31.269 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:31.528 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:31.528 "name": "BaseBdev1", 00:18:31.528 "aliases": [ 00:18:31.528 "1b0786ab-d3e2-4438-b0b2-0d3a15aa51c3" 00:18:31.528 ], 00:18:31.528 "product_name": "Malloc disk", 00:18:31.528 "block_size": 512, 00:18:31.528 "num_blocks": 65536, 00:18:31.528 "uuid": "1b0786ab-d3e2-4438-b0b2-0d3a15aa51c3", 00:18:31.528 "assigned_rate_limits": { 00:18:31.528 "rw_ios_per_sec": 0, 00:18:31.528 "rw_mbytes_per_sec": 0, 00:18:31.528 "r_mbytes_per_sec": 0, 00:18:31.528 "w_mbytes_per_sec": 0 00:18:31.528 }, 00:18:31.528 "claimed": true, 00:18:31.528 "claim_type": "exclusive_write", 00:18:31.528 "zoned": false, 00:18:31.528 "supported_io_types": { 
00:18:31.528 "read": true, 00:18:31.528 "write": true, 00:18:31.528 "unmap": true, 00:18:31.528 "flush": true, 00:18:31.528 "reset": true, 00:18:31.528 "nvme_admin": false, 00:18:31.528 "nvme_io": false, 00:18:31.528 "nvme_io_md": false, 00:18:31.528 "write_zeroes": true, 00:18:31.528 "zcopy": true, 00:18:31.528 "get_zone_info": false, 00:18:31.528 "zone_management": false, 00:18:31.528 "zone_append": false, 00:18:31.528 "compare": false, 00:18:31.529 "compare_and_write": false, 00:18:31.529 "abort": true, 00:18:31.529 "seek_hole": false, 00:18:31.529 "seek_data": false, 00:18:31.529 "copy": true, 00:18:31.529 "nvme_iov_md": false 00:18:31.529 }, 00:18:31.529 "memory_domains": [ 00:18:31.529 { 00:18:31.529 "dma_device_id": "system", 00:18:31.529 "dma_device_type": 1 00:18:31.529 }, 00:18:31.529 { 00:18:31.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.529 "dma_device_type": 2 00:18:31.529 } 00:18:31.529 ], 00:18:31.529 "driver_specific": {} 00:18:31.529 }' 00:18:31.529 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.529 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.529 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:31.529 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.529 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.787 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:31.787 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.787 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.787 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:31.787 10:33:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:31.787 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:31.787 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:31.787 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:31.787 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:31.787 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:32.045 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:32.045 "name": "BaseBdev2", 00:18:32.045 "aliases": [ 00:18:32.045 "593f3e39-7c7a-49bd-a144-cb193897ec0e" 00:18:32.045 ], 00:18:32.045 "product_name": "Malloc disk", 00:18:32.045 "block_size": 512, 00:18:32.045 "num_blocks": 65536, 00:18:32.045 "uuid": "593f3e39-7c7a-49bd-a144-cb193897ec0e", 00:18:32.045 "assigned_rate_limits": { 00:18:32.045 "rw_ios_per_sec": 0, 00:18:32.045 "rw_mbytes_per_sec": 0, 00:18:32.045 "r_mbytes_per_sec": 0, 00:18:32.045 "w_mbytes_per_sec": 0 00:18:32.045 }, 00:18:32.045 "claimed": true, 00:18:32.045 "claim_type": "exclusive_write", 00:18:32.045 "zoned": false, 00:18:32.045 "supported_io_types": { 00:18:32.045 "read": true, 00:18:32.045 "write": true, 00:18:32.045 "unmap": true, 00:18:32.045 "flush": true, 00:18:32.045 "reset": true, 00:18:32.045 "nvme_admin": false, 00:18:32.045 "nvme_io": false, 00:18:32.045 "nvme_io_md": false, 00:18:32.045 "write_zeroes": true, 00:18:32.045 "zcopy": true, 00:18:32.045 "get_zone_info": false, 00:18:32.045 "zone_management": false, 00:18:32.045 "zone_append": false, 00:18:32.045 "compare": false, 00:18:32.045 "compare_and_write": false, 00:18:32.045 "abort": true, 00:18:32.045 "seek_hole": false, 00:18:32.045 "seek_data": 
false, 00:18:32.045 "copy": true, 00:18:32.045 "nvme_iov_md": false 00:18:32.045 }, 00:18:32.045 "memory_domains": [ 00:18:32.045 { 00:18:32.045 "dma_device_id": "system", 00:18:32.045 "dma_device_type": 1 00:18:32.045 }, 00:18:32.045 { 00:18:32.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.045 "dma_device_type": 2 00:18:32.045 } 00:18:32.045 ], 00:18:32.045 "driver_specific": {} 00:18:32.045 }' 00:18:32.045 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.045 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.045 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:32.045 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.045 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.303 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:32.303 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.303 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.303 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:32.303 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.303 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.303 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:32.303 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:32.303 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 00:18:32.303 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:32.561 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:32.561 "name": "BaseBdev3", 00:18:32.561 "aliases": [ 00:18:32.561 "658d6424-0b35-4b0a-82b3-7f767abcca04" 00:18:32.561 ], 00:18:32.561 "product_name": "Malloc disk", 00:18:32.561 "block_size": 512, 00:18:32.561 "num_blocks": 65536, 00:18:32.561 "uuid": "658d6424-0b35-4b0a-82b3-7f767abcca04", 00:18:32.561 "assigned_rate_limits": { 00:18:32.561 "rw_ios_per_sec": 0, 00:18:32.561 "rw_mbytes_per_sec": 0, 00:18:32.561 "r_mbytes_per_sec": 0, 00:18:32.561 "w_mbytes_per_sec": 0 00:18:32.561 }, 00:18:32.561 "claimed": true, 00:18:32.561 "claim_type": "exclusive_write", 00:18:32.561 "zoned": false, 00:18:32.561 "supported_io_types": { 00:18:32.561 "read": true, 00:18:32.561 "write": true, 00:18:32.561 "unmap": true, 00:18:32.561 "flush": true, 00:18:32.561 "reset": true, 00:18:32.561 "nvme_admin": false, 00:18:32.561 "nvme_io": false, 00:18:32.561 "nvme_io_md": false, 00:18:32.561 "write_zeroes": true, 00:18:32.561 "zcopy": true, 00:18:32.561 "get_zone_info": false, 00:18:32.561 "zone_management": false, 00:18:32.561 "zone_append": false, 00:18:32.561 "compare": false, 00:18:32.561 "compare_and_write": false, 00:18:32.561 "abort": true, 00:18:32.561 "seek_hole": false, 00:18:32.561 "seek_data": false, 00:18:32.561 "copy": true, 00:18:32.561 "nvme_iov_md": false 00:18:32.561 }, 00:18:32.561 "memory_domains": [ 00:18:32.561 { 00:18:32.561 "dma_device_id": "system", 00:18:32.561 "dma_device_type": 1 00:18:32.561 }, 00:18:32.561 { 00:18:32.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.561 "dma_device_type": 2 00:18:32.561 } 00:18:32.561 ], 00:18:32.561 "driver_specific": {} 00:18:32.561 }' 00:18:32.561 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.561 10:33:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.561 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:32.561 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.819 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.819 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:32.819 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.819 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.819 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:32.819 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.819 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.819 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:32.819 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:32.819 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:32.819 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:33.078 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:33.078 "name": "BaseBdev4", 00:18:33.078 "aliases": [ 00:18:33.078 "3fbb8685-bb88-4fbd-a137-f0afc71fcda6" 00:18:33.078 ], 00:18:33.078 "product_name": "Malloc disk", 00:18:33.078 "block_size": 512, 00:18:33.078 "num_blocks": 65536, 00:18:33.078 "uuid": "3fbb8685-bb88-4fbd-a137-f0afc71fcda6", 00:18:33.078 "assigned_rate_limits": { 00:18:33.078 
"rw_ios_per_sec": 0, 00:18:33.078 "rw_mbytes_per_sec": 0, 00:18:33.078 "r_mbytes_per_sec": 0, 00:18:33.078 "w_mbytes_per_sec": 0 00:18:33.078 }, 00:18:33.078 "claimed": true, 00:18:33.078 "claim_type": "exclusive_write", 00:18:33.078 "zoned": false, 00:18:33.078 "supported_io_types": { 00:18:33.078 "read": true, 00:18:33.078 "write": true, 00:18:33.078 "unmap": true, 00:18:33.078 "flush": true, 00:18:33.078 "reset": true, 00:18:33.078 "nvme_admin": false, 00:18:33.078 "nvme_io": false, 00:18:33.078 "nvme_io_md": false, 00:18:33.078 "write_zeroes": true, 00:18:33.078 "zcopy": true, 00:18:33.078 "get_zone_info": false, 00:18:33.078 "zone_management": false, 00:18:33.078 "zone_append": false, 00:18:33.078 "compare": false, 00:18:33.078 "compare_and_write": false, 00:18:33.078 "abort": true, 00:18:33.078 "seek_hole": false, 00:18:33.078 "seek_data": false, 00:18:33.078 "copy": true, 00:18:33.078 "nvme_iov_md": false 00:18:33.078 }, 00:18:33.078 "memory_domains": [ 00:18:33.078 { 00:18:33.078 "dma_device_id": "system", 00:18:33.078 "dma_device_type": 1 00:18:33.078 }, 00:18:33.078 { 00:18:33.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.078 "dma_device_type": 2 00:18:33.078 } 00:18:33.078 ], 00:18:33.078 "driver_specific": {} 00:18:33.078 }' 00:18:33.078 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:33.078 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:33.340 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:33.340 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:33.340 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:33.340 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:33.340 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:18:33.340 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:33.340 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:33.340 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:33.340 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:33.340 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:33.340 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:33.599 [2024-07-25 10:33:37.235650] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:33.599 [2024-07-25 10:33:37.235682] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.599 [2024-07-25 10:33:37.235734] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.599 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.857 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.857 "name": "Existed_Raid", 00:18:33.857 "uuid": "d96b919e-4c83-4543-9787-1b546f1b03a5", 00:18:33.857 "strip_size_kb": 64, 00:18:33.857 "state": "offline", 00:18:33.857 "raid_level": "concat", 00:18:33.857 "superblock": true, 00:18:33.857 "num_base_bdevs": 4, 00:18:33.857 "num_base_bdevs_discovered": 3, 00:18:33.857 "num_base_bdevs_operational": 3, 00:18:33.857 "base_bdevs_list": [ 00:18:33.857 { 00:18:33.857 "name": null, 00:18:33.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.857 "is_configured": false, 00:18:33.857 "data_offset": 2048, 00:18:33.857 "data_size": 63488 00:18:33.857 }, 00:18:33.857 { 00:18:33.857 "name": "BaseBdev2", 00:18:33.857 "uuid": 
"593f3e39-7c7a-49bd-a144-cb193897ec0e", 00:18:33.857 "is_configured": true, 00:18:33.857 "data_offset": 2048, 00:18:33.857 "data_size": 63488 00:18:33.857 }, 00:18:33.857 { 00:18:33.857 "name": "BaseBdev3", 00:18:33.857 "uuid": "658d6424-0b35-4b0a-82b3-7f767abcca04", 00:18:33.857 "is_configured": true, 00:18:33.857 "data_offset": 2048, 00:18:33.857 "data_size": 63488 00:18:33.857 }, 00:18:33.857 { 00:18:33.857 "name": "BaseBdev4", 00:18:33.857 "uuid": "3fbb8685-bb88-4fbd-a137-f0afc71fcda6", 00:18:33.857 "is_configured": true, 00:18:33.857 "data_offset": 2048, 00:18:33.857 "data_size": 63488 00:18:33.857 } 00:18:33.857 ] 00:18:33.857 }' 00:18:33.857 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.857 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.422 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:34.422 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:34.422 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.422 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:34.680 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:34.680 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:34.680 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:34.938 [2024-07-25 10:33:38.589004] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:34.938 10:33:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:34.938 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:34.938 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.938 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:35.504 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:35.504 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:35.504 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:35.504 [2024-07-25 10:33:39.183548] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:35.504 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:35.504 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:35.504 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.762 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:36.021 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:36.021 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:36.021 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:36.279 [2024-07-25 10:33:39.734306] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:36.279 [2024-07-25 10:33:39.734372] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbd2cb0 name Existed_Raid, state offline 00:18:36.279 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:36.279 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:36.279 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.279 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:36.537 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:36.537 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:36.537 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:18:36.537 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:36.537 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:36.537 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:36.537 BaseBdev2 00:18:36.795 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:36.795 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:36.795 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:36.795 10:33:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:36.795 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:36.795 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:36.795 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:37.053 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:37.311 [ 00:18:37.311 { 00:18:37.311 "name": "BaseBdev2", 00:18:37.311 "aliases": [ 00:18:37.311 "b8323100-f3fb-4536-9b82-dced183491a9" 00:18:37.311 ], 00:18:37.311 "product_name": "Malloc disk", 00:18:37.311 "block_size": 512, 00:18:37.311 "num_blocks": 65536, 00:18:37.311 "uuid": "b8323100-f3fb-4536-9b82-dced183491a9", 00:18:37.311 "assigned_rate_limits": { 00:18:37.311 "rw_ios_per_sec": 0, 00:18:37.311 "rw_mbytes_per_sec": 0, 00:18:37.311 "r_mbytes_per_sec": 0, 00:18:37.311 "w_mbytes_per_sec": 0 00:18:37.311 }, 00:18:37.311 "claimed": false, 00:18:37.311 "zoned": false, 00:18:37.311 "supported_io_types": { 00:18:37.311 "read": true, 00:18:37.311 "write": true, 00:18:37.311 "unmap": true, 00:18:37.311 "flush": true, 00:18:37.311 "reset": true, 00:18:37.311 "nvme_admin": false, 00:18:37.311 "nvme_io": false, 00:18:37.311 "nvme_io_md": false, 00:18:37.311 "write_zeroes": true, 00:18:37.311 "zcopy": true, 00:18:37.311 "get_zone_info": false, 00:18:37.311 "zone_management": false, 00:18:37.311 "zone_append": false, 00:18:37.311 "compare": false, 00:18:37.311 "compare_and_write": false, 00:18:37.311 "abort": true, 00:18:37.311 "seek_hole": false, 00:18:37.311 "seek_data": false, 00:18:37.311 "copy": true, 00:18:37.311 "nvme_iov_md": 
false 00:18:37.311 }, 00:18:37.311 "memory_domains": [ 00:18:37.311 { 00:18:37.311 "dma_device_id": "system", 00:18:37.311 "dma_device_type": 1 00:18:37.311 }, 00:18:37.311 { 00:18:37.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.311 "dma_device_type": 2 00:18:37.311 } 00:18:37.311 ], 00:18:37.311 "driver_specific": {} 00:18:37.311 } 00:18:37.311 ] 00:18:37.312 10:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:37.312 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:37.312 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:37.312 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:37.570 BaseBdev3 00:18:37.570 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:37.570 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:37.570 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:37.570 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:37.570 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:37.570 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:37.570 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:37.827 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:38.085 [ 00:18:38.085 { 00:18:38.085 "name": "BaseBdev3", 00:18:38.085 "aliases": [ 00:18:38.085 "7e43116f-6f28-4d2e-8a06-442b815fbf96" 00:18:38.085 ], 00:18:38.085 "product_name": "Malloc disk", 00:18:38.085 "block_size": 512, 00:18:38.085 "num_blocks": 65536, 00:18:38.085 "uuid": "7e43116f-6f28-4d2e-8a06-442b815fbf96", 00:18:38.085 "assigned_rate_limits": { 00:18:38.085 "rw_ios_per_sec": 0, 00:18:38.085 "rw_mbytes_per_sec": 0, 00:18:38.085 "r_mbytes_per_sec": 0, 00:18:38.085 "w_mbytes_per_sec": 0 00:18:38.085 }, 00:18:38.085 "claimed": false, 00:18:38.085 "zoned": false, 00:18:38.085 "supported_io_types": { 00:18:38.085 "read": true, 00:18:38.085 "write": true, 00:18:38.085 "unmap": true, 00:18:38.085 "flush": true, 00:18:38.085 "reset": true, 00:18:38.085 "nvme_admin": false, 00:18:38.085 "nvme_io": false, 00:18:38.085 "nvme_io_md": false, 00:18:38.085 "write_zeroes": true, 00:18:38.085 "zcopy": true, 00:18:38.085 "get_zone_info": false, 00:18:38.085 "zone_management": false, 00:18:38.085 "zone_append": false, 00:18:38.085 "compare": false, 00:18:38.085 "compare_and_write": false, 00:18:38.085 "abort": true, 00:18:38.085 "seek_hole": false, 00:18:38.085 "seek_data": false, 00:18:38.085 "copy": true, 00:18:38.085 "nvme_iov_md": false 00:18:38.085 }, 00:18:38.085 "memory_domains": [ 00:18:38.085 { 00:18:38.085 "dma_device_id": "system", 00:18:38.085 "dma_device_type": 1 00:18:38.085 }, 00:18:38.085 { 00:18:38.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.085 "dma_device_type": 2 00:18:38.085 } 00:18:38.085 ], 00:18:38.085 "driver_specific": {} 00:18:38.085 } 00:18:38.085 ] 00:18:38.085 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:38.085 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:38.085 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
00:18:38.085 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:38.343 BaseBdev4 00:18:38.343 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:18:38.343 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:38.343 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:38.343 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:38.343 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:38.343 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:38.343 10:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:38.601 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:38.859 [ 00:18:38.859 { 00:18:38.859 "name": "BaseBdev4", 00:18:38.859 "aliases": [ 00:18:38.859 "5d2e6847-5aa7-4c17-a84e-e363e7474edd" 00:18:38.859 ], 00:18:38.859 "product_name": "Malloc disk", 00:18:38.859 "block_size": 512, 00:18:38.859 "num_blocks": 65536, 00:18:38.859 "uuid": "5d2e6847-5aa7-4c17-a84e-e363e7474edd", 00:18:38.859 "assigned_rate_limits": { 00:18:38.859 "rw_ios_per_sec": 0, 00:18:38.859 "rw_mbytes_per_sec": 0, 00:18:38.859 "r_mbytes_per_sec": 0, 00:18:38.859 "w_mbytes_per_sec": 0 00:18:38.859 }, 00:18:38.859 "claimed": false, 00:18:38.859 "zoned": false, 00:18:38.859 "supported_io_types": { 00:18:38.859 
"read": true, 00:18:38.859 "write": true, 00:18:38.859 "unmap": true, 00:18:38.859 "flush": true, 00:18:38.859 "reset": true, 00:18:38.859 "nvme_admin": false, 00:18:38.859 "nvme_io": false, 00:18:38.859 "nvme_io_md": false, 00:18:38.859 "write_zeroes": true, 00:18:38.859 "zcopy": true, 00:18:38.859 "get_zone_info": false, 00:18:38.859 "zone_management": false, 00:18:38.859 "zone_append": false, 00:18:38.859 "compare": false, 00:18:38.859 "compare_and_write": false, 00:18:38.859 "abort": true, 00:18:38.859 "seek_hole": false, 00:18:38.859 "seek_data": false, 00:18:38.859 "copy": true, 00:18:38.859 "nvme_iov_md": false 00:18:38.859 }, 00:18:38.859 "memory_domains": [ 00:18:38.859 { 00:18:38.859 "dma_device_id": "system", 00:18:38.859 "dma_device_type": 1 00:18:38.859 }, 00:18:38.859 { 00:18:38.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.860 "dma_device_type": 2 00:18:38.860 } 00:18:38.860 ], 00:18:38.860 "driver_specific": {} 00:18:38.860 } 00:18:38.860 ] 00:18:38.860 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:38.860 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:38.860 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:38.860 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:39.118 [2024-07-25 10:33:42.597949] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:39.118 [2024-07-25 10:33:42.597993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:39.118 [2024-07-25 10:33:42.598037] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.118 [2024-07-25 
10:33:42.599439] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:39.118 [2024-07-25 10:33:42.599496] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:39.118 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:39.118 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:39.118 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:39.118 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:39.118 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:39.118 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:39.118 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:39.118 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:39.118 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:39.118 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:39.118 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.118 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.376 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:39.376 "name": "Existed_Raid", 00:18:39.376 "uuid": "b823c579-617b-4b27-8fde-750ad4afceaa", 00:18:39.376 "strip_size_kb": 64, 
00:18:39.376 "state": "configuring", 00:18:39.376 "raid_level": "concat", 00:18:39.376 "superblock": true, 00:18:39.376 "num_base_bdevs": 4, 00:18:39.376 "num_base_bdevs_discovered": 3, 00:18:39.376 "num_base_bdevs_operational": 4, 00:18:39.376 "base_bdevs_list": [ 00:18:39.376 { 00:18:39.376 "name": "BaseBdev1", 00:18:39.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.376 "is_configured": false, 00:18:39.376 "data_offset": 0, 00:18:39.376 "data_size": 0 00:18:39.376 }, 00:18:39.376 { 00:18:39.376 "name": "BaseBdev2", 00:18:39.376 "uuid": "b8323100-f3fb-4536-9b82-dced183491a9", 00:18:39.376 "is_configured": true, 00:18:39.376 "data_offset": 2048, 00:18:39.376 "data_size": 63488 00:18:39.376 }, 00:18:39.376 { 00:18:39.376 "name": "BaseBdev3", 00:18:39.376 "uuid": "7e43116f-6f28-4d2e-8a06-442b815fbf96", 00:18:39.376 "is_configured": true, 00:18:39.376 "data_offset": 2048, 00:18:39.376 "data_size": 63488 00:18:39.376 }, 00:18:39.376 { 00:18:39.376 "name": "BaseBdev4", 00:18:39.376 "uuid": "5d2e6847-5aa7-4c17-a84e-e363e7474edd", 00:18:39.376 "is_configured": true, 00:18:39.376 "data_offset": 2048, 00:18:39.376 "data_size": 63488 00:18:39.376 } 00:18:39.376 ] 00:18:39.376 }' 00:18:39.376 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:39.376 10:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.942 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:39.942 [2024-07-25 10:33:43.632654] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:40.200 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:40.200 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:18:40.200 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.200 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:40.200 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:40.200 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:40.200 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.200 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.200 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.200 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.200 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.200 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.459 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.459 "name": "Existed_Raid", 00:18:40.459 "uuid": "b823c579-617b-4b27-8fde-750ad4afceaa", 00:18:40.459 "strip_size_kb": 64, 00:18:40.459 "state": "configuring", 00:18:40.459 "raid_level": "concat", 00:18:40.459 "superblock": true, 00:18:40.459 "num_base_bdevs": 4, 00:18:40.459 "num_base_bdevs_discovered": 2, 00:18:40.459 "num_base_bdevs_operational": 4, 00:18:40.459 "base_bdevs_list": [ 00:18:40.459 { 00:18:40.459 "name": "BaseBdev1", 00:18:40.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.459 "is_configured": false, 00:18:40.459 "data_offset": 0, 00:18:40.459 "data_size": 0 
00:18:40.459 }, 00:18:40.459 { 00:18:40.459 "name": null, 00:18:40.459 "uuid": "b8323100-f3fb-4536-9b82-dced183491a9", 00:18:40.459 "is_configured": false, 00:18:40.459 "data_offset": 2048, 00:18:40.459 "data_size": 63488 00:18:40.459 }, 00:18:40.459 { 00:18:40.459 "name": "BaseBdev3", 00:18:40.459 "uuid": "7e43116f-6f28-4d2e-8a06-442b815fbf96", 00:18:40.459 "is_configured": true, 00:18:40.459 "data_offset": 2048, 00:18:40.459 "data_size": 63488 00:18:40.459 }, 00:18:40.459 { 00:18:40.459 "name": "BaseBdev4", 00:18:40.459 "uuid": "5d2e6847-5aa7-4c17-a84e-e363e7474edd", 00:18:40.459 "is_configured": true, 00:18:40.459 "data_offset": 2048, 00:18:40.459 "data_size": 63488 00:18:40.459 } 00:18:40.459 ] 00:18:40.459 }' 00:18:40.459 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.459 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.025 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.025 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:41.025 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:41.025 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:41.283 [2024-07-25 10:33:44.946283] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.283 BaseBdev1 00:18:41.283 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:41.283 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 
00:18:41.283 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:41.283 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:41.283 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:41.283 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:41.283 10:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:41.541 10:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:41.799 [ 00:18:41.799 { 00:18:41.799 "name": "BaseBdev1", 00:18:41.799 "aliases": [ 00:18:41.800 "60f42eb5-4eeb-46a2-9f46-b2952d39f6bc" 00:18:41.800 ], 00:18:41.800 "product_name": "Malloc disk", 00:18:41.800 "block_size": 512, 00:18:41.800 "num_blocks": 65536, 00:18:41.800 "uuid": "60f42eb5-4eeb-46a2-9f46-b2952d39f6bc", 00:18:41.800 "assigned_rate_limits": { 00:18:41.800 "rw_ios_per_sec": 0, 00:18:41.800 "rw_mbytes_per_sec": 0, 00:18:41.800 "r_mbytes_per_sec": 0, 00:18:41.800 "w_mbytes_per_sec": 0 00:18:41.800 }, 00:18:41.800 "claimed": true, 00:18:41.800 "claim_type": "exclusive_write", 00:18:41.800 "zoned": false, 00:18:41.800 "supported_io_types": { 00:18:41.800 "read": true, 00:18:41.800 "write": true, 00:18:41.800 "unmap": true, 00:18:41.800 "flush": true, 00:18:41.800 "reset": true, 00:18:41.800 "nvme_admin": false, 00:18:41.800 "nvme_io": false, 00:18:41.800 "nvme_io_md": false, 00:18:41.800 "write_zeroes": true, 00:18:41.800 "zcopy": true, 00:18:41.800 "get_zone_info": false, 00:18:41.800 "zone_management": false, 00:18:41.800 "zone_append": false, 00:18:41.800 "compare": false, 
00:18:41.800 "compare_and_write": false, 00:18:41.800 "abort": true, 00:18:41.800 "seek_hole": false, 00:18:41.800 "seek_data": false, 00:18:41.800 "copy": true, 00:18:41.800 "nvme_iov_md": false 00:18:41.800 }, 00:18:41.800 "memory_domains": [ 00:18:41.800 { 00:18:41.800 "dma_device_id": "system", 00:18:41.800 "dma_device_type": 1 00:18:41.800 }, 00:18:41.800 { 00:18:41.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.800 "dma_device_type": 2 00:18:41.800 } 00:18:41.800 ], 00:18:41.800 "driver_specific": {} 00:18:41.800 } 00:18:41.800 ] 00:18:41.800 10:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:41.800 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:41.800 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:41.800 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:41.800 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:41.800 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:41.800 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:41.800 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.800 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.800 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:41.800 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.800 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.800 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.058 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.058 "name": "Existed_Raid", 00:18:42.058 "uuid": "b823c579-617b-4b27-8fde-750ad4afceaa", 00:18:42.058 "strip_size_kb": 64, 00:18:42.058 "state": "configuring", 00:18:42.058 "raid_level": "concat", 00:18:42.058 "superblock": true, 00:18:42.058 "num_base_bdevs": 4, 00:18:42.058 "num_base_bdevs_discovered": 3, 00:18:42.058 "num_base_bdevs_operational": 4, 00:18:42.058 "base_bdevs_list": [ 00:18:42.058 { 00:18:42.058 "name": "BaseBdev1", 00:18:42.058 "uuid": "60f42eb5-4eeb-46a2-9f46-b2952d39f6bc", 00:18:42.058 "is_configured": true, 00:18:42.058 "data_offset": 2048, 00:18:42.058 "data_size": 63488 00:18:42.058 }, 00:18:42.058 { 00:18:42.058 "name": null, 00:18:42.058 "uuid": "b8323100-f3fb-4536-9b82-dced183491a9", 00:18:42.058 "is_configured": false, 00:18:42.058 "data_offset": 2048, 00:18:42.058 "data_size": 63488 00:18:42.058 }, 00:18:42.058 { 00:18:42.058 "name": "BaseBdev3", 00:18:42.058 "uuid": "7e43116f-6f28-4d2e-8a06-442b815fbf96", 00:18:42.058 "is_configured": true, 00:18:42.058 "data_offset": 2048, 00:18:42.058 "data_size": 63488 00:18:42.058 }, 00:18:42.058 { 00:18:42.058 "name": "BaseBdev4", 00:18:42.058 "uuid": "5d2e6847-5aa7-4c17-a84e-e363e7474edd", 00:18:42.058 "is_configured": true, 00:18:42.058 "data_offset": 2048, 00:18:42.058 "data_size": 63488 00:18:42.058 } 00:18:42.058 ] 00:18:42.058 }' 00:18:42.058 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.058 10:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.625 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.625 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:42.883 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:42.883 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:43.142 [2024-07-25 10:33:46.779248] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:43.142 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:43.142 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.142 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:43.142 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:43.142 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.142 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:43.142 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.142 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.142 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.142 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.142 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:43.142 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.400 10:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.400 "name": "Existed_Raid", 00:18:43.400 "uuid": "b823c579-617b-4b27-8fde-750ad4afceaa", 00:18:43.400 "strip_size_kb": 64, 00:18:43.400 "state": "configuring", 00:18:43.400 "raid_level": "concat", 00:18:43.400 "superblock": true, 00:18:43.400 "num_base_bdevs": 4, 00:18:43.400 "num_base_bdevs_discovered": 2, 00:18:43.400 "num_base_bdevs_operational": 4, 00:18:43.400 "base_bdevs_list": [ 00:18:43.400 { 00:18:43.400 "name": "BaseBdev1", 00:18:43.400 "uuid": "60f42eb5-4eeb-46a2-9f46-b2952d39f6bc", 00:18:43.400 "is_configured": true, 00:18:43.400 "data_offset": 2048, 00:18:43.400 "data_size": 63488 00:18:43.400 }, 00:18:43.400 { 00:18:43.400 "name": null, 00:18:43.400 "uuid": "b8323100-f3fb-4536-9b82-dced183491a9", 00:18:43.400 "is_configured": false, 00:18:43.400 "data_offset": 2048, 00:18:43.400 "data_size": 63488 00:18:43.400 }, 00:18:43.400 { 00:18:43.400 "name": null, 00:18:43.400 "uuid": "7e43116f-6f28-4d2e-8a06-442b815fbf96", 00:18:43.400 "is_configured": false, 00:18:43.400 "data_offset": 2048, 00:18:43.400 "data_size": 63488 00:18:43.400 }, 00:18:43.400 { 00:18:43.400 "name": "BaseBdev4", 00:18:43.400 "uuid": "5d2e6847-5aa7-4c17-a84e-e363e7474edd", 00:18:43.400 "is_configured": true, 00:18:43.400 "data_offset": 2048, 00:18:43.400 "data_size": 63488 00:18:43.400 } 00:18:43.400 ] 00:18:43.400 }' 00:18:43.400 10:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.400 10:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.966 10:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:43.966 10:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:44.224 10:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:44.224 10:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:44.483 [2024-07-25 10:33:48.130825] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:44.483 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:44.483 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:44.483 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:44.483 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:44.483 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:44.483 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:44.483 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:44.483 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:44.483 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:44.483 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:44.483 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:44.483 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.741 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:44.742 "name": "Existed_Raid", 00:18:44.742 "uuid": "b823c579-617b-4b27-8fde-750ad4afceaa", 00:18:44.742 "strip_size_kb": 64, 00:18:44.742 "state": "configuring", 00:18:44.742 "raid_level": "concat", 00:18:44.742 "superblock": true, 00:18:44.742 "num_base_bdevs": 4, 00:18:44.742 "num_base_bdevs_discovered": 3, 00:18:44.742 "num_base_bdevs_operational": 4, 00:18:44.742 "base_bdevs_list": [ 00:18:44.742 { 00:18:44.742 "name": "BaseBdev1", 00:18:44.742 "uuid": "60f42eb5-4eeb-46a2-9f46-b2952d39f6bc", 00:18:44.742 "is_configured": true, 00:18:44.742 "data_offset": 2048, 00:18:44.742 "data_size": 63488 00:18:44.742 }, 00:18:44.742 { 00:18:44.742 "name": null, 00:18:44.742 "uuid": "b8323100-f3fb-4536-9b82-dced183491a9", 00:18:44.742 "is_configured": false, 00:18:44.742 "data_offset": 2048, 00:18:44.742 "data_size": 63488 00:18:44.742 }, 00:18:44.742 { 00:18:44.742 "name": "BaseBdev3", 00:18:44.742 "uuid": "7e43116f-6f28-4d2e-8a06-442b815fbf96", 00:18:44.742 "is_configured": true, 00:18:44.742 "data_offset": 2048, 00:18:44.742 "data_size": 63488 00:18:44.742 }, 00:18:44.742 { 00:18:44.742 "name": "BaseBdev4", 00:18:44.742 "uuid": "5d2e6847-5aa7-4c17-a84e-e363e7474edd", 00:18:44.742 "is_configured": true, 00:18:44.742 "data_offset": 2048, 00:18:44.742 "data_size": 63488 00:18:44.742 } 00:18:44.742 ] 00:18:44.742 }' 00:18:44.742 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:44.742 10:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.307 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:45.307 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:45.565 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:45.566 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:45.860 [2024-07-25 10:33:49.438364] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:45.860 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:45.860 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:45.860 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:45.860 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:45.860 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:45.860 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:45.860 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.860 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.860 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.860 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.860 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.860 
10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.119 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:46.119 "name": "Existed_Raid", 00:18:46.119 "uuid": "b823c579-617b-4b27-8fde-750ad4afceaa", 00:18:46.119 "strip_size_kb": 64, 00:18:46.119 "state": "configuring", 00:18:46.119 "raid_level": "concat", 00:18:46.119 "superblock": true, 00:18:46.119 "num_base_bdevs": 4, 00:18:46.119 "num_base_bdevs_discovered": 2, 00:18:46.119 "num_base_bdevs_operational": 4, 00:18:46.119 "base_bdevs_list": [ 00:18:46.119 { 00:18:46.119 "name": null, 00:18:46.119 "uuid": "60f42eb5-4eeb-46a2-9f46-b2952d39f6bc", 00:18:46.119 "is_configured": false, 00:18:46.119 "data_offset": 2048, 00:18:46.119 "data_size": 63488 00:18:46.119 }, 00:18:46.119 { 00:18:46.119 "name": null, 00:18:46.119 "uuid": "b8323100-f3fb-4536-9b82-dced183491a9", 00:18:46.119 "is_configured": false, 00:18:46.119 "data_offset": 2048, 00:18:46.119 "data_size": 63488 00:18:46.119 }, 00:18:46.119 { 00:18:46.119 "name": "BaseBdev3", 00:18:46.119 "uuid": "7e43116f-6f28-4d2e-8a06-442b815fbf96", 00:18:46.119 "is_configured": true, 00:18:46.119 "data_offset": 2048, 00:18:46.119 "data_size": 63488 00:18:46.119 }, 00:18:46.119 { 00:18:46.119 "name": "BaseBdev4", 00:18:46.119 "uuid": "5d2e6847-5aa7-4c17-a84e-e363e7474edd", 00:18:46.119 "is_configured": true, 00:18:46.119 "data_offset": 2048, 00:18:46.119 "data_size": 63488 00:18:46.119 } 00:18:46.119 ] 00:18:46.119 }' 00:18:46.119 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:46.119 10:33:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.685 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.685 10:33:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:46.943 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:46.944 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:47.202 [2024-07-25 10:33:50.737859] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:47.202 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:47.202 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:47.202 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:47.202 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:47.202 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:47.202 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:47.202 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:47.202 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:47.202 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:47.202 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:47.202 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.202 10:33:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.460 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.460 "name": "Existed_Raid", 00:18:47.460 "uuid": "b823c579-617b-4b27-8fde-750ad4afceaa", 00:18:47.460 "strip_size_kb": 64, 00:18:47.460 "state": "configuring", 00:18:47.460 "raid_level": "concat", 00:18:47.460 "superblock": true, 00:18:47.460 "num_base_bdevs": 4, 00:18:47.460 "num_base_bdevs_discovered": 3, 00:18:47.460 "num_base_bdevs_operational": 4, 00:18:47.460 "base_bdevs_list": [ 00:18:47.460 { 00:18:47.460 "name": null, 00:18:47.460 "uuid": "60f42eb5-4eeb-46a2-9f46-b2952d39f6bc", 00:18:47.460 "is_configured": false, 00:18:47.460 "data_offset": 2048, 00:18:47.460 "data_size": 63488 00:18:47.460 }, 00:18:47.460 { 00:18:47.460 "name": "BaseBdev2", 00:18:47.460 "uuid": "b8323100-f3fb-4536-9b82-dced183491a9", 00:18:47.460 "is_configured": true, 00:18:47.460 "data_offset": 2048, 00:18:47.460 "data_size": 63488 00:18:47.460 }, 00:18:47.460 { 00:18:47.460 "name": "BaseBdev3", 00:18:47.460 "uuid": "7e43116f-6f28-4d2e-8a06-442b815fbf96", 00:18:47.460 "is_configured": true, 00:18:47.460 "data_offset": 2048, 00:18:47.460 "data_size": 63488 00:18:47.460 }, 00:18:47.460 { 00:18:47.460 "name": "BaseBdev4", 00:18:47.460 "uuid": "5d2e6847-5aa7-4c17-a84e-e363e7474edd", 00:18:47.460 "is_configured": true, 00:18:47.460 "data_offset": 2048, 00:18:47.460 "data_size": 63488 00:18:47.460 } 00:18:47.460 ] 00:18:47.460 }' 00:18:47.460 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.460 10:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.025 10:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.025 10:33:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:48.283 10:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:48.283 10:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.283 10:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:48.541 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 60f42eb5-4eeb-46a2-9f46-b2952d39f6bc 00:18:48.799 [2024-07-25 10:33:52.328236] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:48.799 [2024-07-25 10:33:52.328468] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xd783a0 00:18:48.799 [2024-07-25 10:33:52.328486] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:48.799 [2024-07-25 10:33:52.328674] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd78680 00:18:48.799 [2024-07-25 10:33:52.328819] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd783a0 00:18:48.799 [2024-07-25 10:33:52.328835] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xd783a0 00:18:48.799 [2024-07-25 10:33:52.328943] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.799 NewBaseBdev 00:18:48.799 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:48.799 10:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:48.799 10:33:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:48.799 10:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:48.799 10:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:48.799 10:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:48.799 10:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:49.057 10:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:49.316 [ 00:18:49.316 { 00:18:49.316 "name": "NewBaseBdev", 00:18:49.316 "aliases": [ 00:18:49.316 "60f42eb5-4eeb-46a2-9f46-b2952d39f6bc" 00:18:49.316 ], 00:18:49.316 "product_name": "Malloc disk", 00:18:49.316 "block_size": 512, 00:18:49.316 "num_blocks": 65536, 00:18:49.316 "uuid": "60f42eb5-4eeb-46a2-9f46-b2952d39f6bc", 00:18:49.316 "assigned_rate_limits": { 00:18:49.316 "rw_ios_per_sec": 0, 00:18:49.316 "rw_mbytes_per_sec": 0, 00:18:49.316 "r_mbytes_per_sec": 0, 00:18:49.316 "w_mbytes_per_sec": 0 00:18:49.316 }, 00:18:49.316 "claimed": true, 00:18:49.316 "claim_type": "exclusive_write", 00:18:49.316 "zoned": false, 00:18:49.316 "supported_io_types": { 00:18:49.316 "read": true, 00:18:49.316 "write": true, 00:18:49.316 "unmap": true, 00:18:49.316 "flush": true, 00:18:49.316 "reset": true, 00:18:49.316 "nvme_admin": false, 00:18:49.316 "nvme_io": false, 00:18:49.316 "nvme_io_md": false, 00:18:49.316 "write_zeroes": true, 00:18:49.316 "zcopy": true, 00:18:49.316 "get_zone_info": false, 00:18:49.316 "zone_management": false, 00:18:49.316 "zone_append": false, 00:18:49.316 "compare": false, 00:18:49.316 
"compare_and_write": false, 00:18:49.316 "abort": true, 00:18:49.316 "seek_hole": false, 00:18:49.316 "seek_data": false, 00:18:49.316 "copy": true, 00:18:49.316 "nvme_iov_md": false 00:18:49.316 }, 00:18:49.316 "memory_domains": [ 00:18:49.316 { 00:18:49.316 "dma_device_id": "system", 00:18:49.316 "dma_device_type": 1 00:18:49.316 }, 00:18:49.316 { 00:18:49.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.316 "dma_device_type": 2 00:18:49.316 } 00:18:49.316 ], 00:18:49.316 "driver_specific": {} 00:18:49.316 } 00:18:49.316 ] 00:18:49.316 10:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:49.316 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:49.316 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:49.316 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:49.316 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:49.316 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:49.316 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:49.316 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.316 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.316 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.316 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.316 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.316 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.574 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.574 "name": "Existed_Raid", 00:18:49.574 "uuid": "b823c579-617b-4b27-8fde-750ad4afceaa", 00:18:49.574 "strip_size_kb": 64, 00:18:49.574 "state": "online", 00:18:49.574 "raid_level": "concat", 00:18:49.574 "superblock": true, 00:18:49.574 "num_base_bdevs": 4, 00:18:49.574 "num_base_bdevs_discovered": 4, 00:18:49.574 "num_base_bdevs_operational": 4, 00:18:49.574 "base_bdevs_list": [ 00:18:49.574 { 00:18:49.574 "name": "NewBaseBdev", 00:18:49.574 "uuid": "60f42eb5-4eeb-46a2-9f46-b2952d39f6bc", 00:18:49.574 "is_configured": true, 00:18:49.574 "data_offset": 2048, 00:18:49.574 "data_size": 63488 00:18:49.574 }, 00:18:49.574 { 00:18:49.574 "name": "BaseBdev2", 00:18:49.574 "uuid": "b8323100-f3fb-4536-9b82-dced183491a9", 00:18:49.574 "is_configured": true, 00:18:49.574 "data_offset": 2048, 00:18:49.574 "data_size": 63488 00:18:49.574 }, 00:18:49.574 { 00:18:49.574 "name": "BaseBdev3", 00:18:49.574 "uuid": "7e43116f-6f28-4d2e-8a06-442b815fbf96", 00:18:49.574 "is_configured": true, 00:18:49.574 "data_offset": 2048, 00:18:49.574 "data_size": 63488 00:18:49.574 }, 00:18:49.574 { 00:18:49.574 "name": "BaseBdev4", 00:18:49.574 "uuid": "5d2e6847-5aa7-4c17-a84e-e363e7474edd", 00:18:49.574 "is_configured": true, 00:18:49.574 "data_offset": 2048, 00:18:49.574 "data_size": 63488 00:18:49.574 } 00:18:49.574 ] 00:18:49.574 }' 00:18:49.574 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.574 10:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.138 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:50.138 10:33:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:50.138 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:50.138 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:50.139 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:50.139 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:50.139 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:50.139 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:50.397 [2024-07-25 10:33:53.980988] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.397 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:50.397 "name": "Existed_Raid", 00:18:50.397 "aliases": [ 00:18:50.397 "b823c579-617b-4b27-8fde-750ad4afceaa" 00:18:50.397 ], 00:18:50.397 "product_name": "Raid Volume", 00:18:50.397 "block_size": 512, 00:18:50.397 "num_blocks": 253952, 00:18:50.397 "uuid": "b823c579-617b-4b27-8fde-750ad4afceaa", 00:18:50.397 "assigned_rate_limits": { 00:18:50.397 "rw_ios_per_sec": 0, 00:18:50.397 "rw_mbytes_per_sec": 0, 00:18:50.397 "r_mbytes_per_sec": 0, 00:18:50.397 "w_mbytes_per_sec": 0 00:18:50.397 }, 00:18:50.397 "claimed": false, 00:18:50.397 "zoned": false, 00:18:50.397 "supported_io_types": { 00:18:50.397 "read": true, 00:18:50.397 "write": true, 00:18:50.397 "unmap": true, 00:18:50.397 "flush": true, 00:18:50.397 "reset": true, 00:18:50.397 "nvme_admin": false, 00:18:50.397 "nvme_io": false, 00:18:50.397 "nvme_io_md": false, 00:18:50.397 "write_zeroes": true, 00:18:50.397 "zcopy": false, 00:18:50.397 
"get_zone_info": false, 00:18:50.397 "zone_management": false, 00:18:50.397 "zone_append": false, 00:18:50.397 "compare": false, 00:18:50.397 "compare_and_write": false, 00:18:50.397 "abort": false, 00:18:50.397 "seek_hole": false, 00:18:50.397 "seek_data": false, 00:18:50.397 "copy": false, 00:18:50.397 "nvme_iov_md": false 00:18:50.397 }, 00:18:50.397 "memory_domains": [ 00:18:50.397 { 00:18:50.397 "dma_device_id": "system", 00:18:50.397 "dma_device_type": 1 00:18:50.397 }, 00:18:50.397 { 00:18:50.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.397 "dma_device_type": 2 00:18:50.397 }, 00:18:50.397 { 00:18:50.397 "dma_device_id": "system", 00:18:50.397 "dma_device_type": 1 00:18:50.397 }, 00:18:50.397 { 00:18:50.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.397 "dma_device_type": 2 00:18:50.397 }, 00:18:50.397 { 00:18:50.397 "dma_device_id": "system", 00:18:50.397 "dma_device_type": 1 00:18:50.397 }, 00:18:50.397 { 00:18:50.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.397 "dma_device_type": 2 00:18:50.397 }, 00:18:50.397 { 00:18:50.397 "dma_device_id": "system", 00:18:50.397 "dma_device_type": 1 00:18:50.397 }, 00:18:50.397 { 00:18:50.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.397 "dma_device_type": 2 00:18:50.397 } 00:18:50.397 ], 00:18:50.397 "driver_specific": { 00:18:50.397 "raid": { 00:18:50.397 "uuid": "b823c579-617b-4b27-8fde-750ad4afceaa", 00:18:50.397 "strip_size_kb": 64, 00:18:50.397 "state": "online", 00:18:50.397 "raid_level": "concat", 00:18:50.397 "superblock": true, 00:18:50.397 "num_base_bdevs": 4, 00:18:50.397 "num_base_bdevs_discovered": 4, 00:18:50.397 "num_base_bdevs_operational": 4, 00:18:50.397 "base_bdevs_list": [ 00:18:50.397 { 00:18:50.397 "name": "NewBaseBdev", 00:18:50.397 "uuid": "60f42eb5-4eeb-46a2-9f46-b2952d39f6bc", 00:18:50.397 "is_configured": true, 00:18:50.397 "data_offset": 2048, 00:18:50.397 "data_size": 63488 00:18:50.397 }, 00:18:50.397 { 00:18:50.397 "name": "BaseBdev2", 00:18:50.397 
"uuid": "b8323100-f3fb-4536-9b82-dced183491a9", 00:18:50.397 "is_configured": true, 00:18:50.397 "data_offset": 2048, 00:18:50.397 "data_size": 63488 00:18:50.397 }, 00:18:50.397 { 00:18:50.397 "name": "BaseBdev3", 00:18:50.397 "uuid": "7e43116f-6f28-4d2e-8a06-442b815fbf96", 00:18:50.397 "is_configured": true, 00:18:50.397 "data_offset": 2048, 00:18:50.397 "data_size": 63488 00:18:50.397 }, 00:18:50.397 { 00:18:50.397 "name": "BaseBdev4", 00:18:50.397 "uuid": "5d2e6847-5aa7-4c17-a84e-e363e7474edd", 00:18:50.397 "is_configured": true, 00:18:50.397 "data_offset": 2048, 00:18:50.397 "data_size": 63488 00:18:50.397 } 00:18:50.397 ] 00:18:50.397 } 00:18:50.397 } 00:18:50.397 }' 00:18:50.397 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:50.397 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:50.397 BaseBdev2 00:18:50.397 BaseBdev3 00:18:50.397 BaseBdev4' 00:18:50.397 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:50.397 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:50.398 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:50.656 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:50.656 "name": "NewBaseBdev", 00:18:50.656 "aliases": [ 00:18:50.656 "60f42eb5-4eeb-46a2-9f46-b2952d39f6bc" 00:18:50.656 ], 00:18:50.656 "product_name": "Malloc disk", 00:18:50.656 "block_size": 512, 00:18:50.656 "num_blocks": 65536, 00:18:50.656 "uuid": "60f42eb5-4eeb-46a2-9f46-b2952d39f6bc", 00:18:50.656 "assigned_rate_limits": { 00:18:50.656 "rw_ios_per_sec": 0, 00:18:50.656 "rw_mbytes_per_sec": 0, 
00:18:50.656 "r_mbytes_per_sec": 0, 00:18:50.656 "w_mbytes_per_sec": 0 00:18:50.656 }, 00:18:50.656 "claimed": true, 00:18:50.656 "claim_type": "exclusive_write", 00:18:50.656 "zoned": false, 00:18:50.656 "supported_io_types": { 00:18:50.656 "read": true, 00:18:50.656 "write": true, 00:18:50.656 "unmap": true, 00:18:50.656 "flush": true, 00:18:50.656 "reset": true, 00:18:50.656 "nvme_admin": false, 00:18:50.656 "nvme_io": false, 00:18:50.656 "nvme_io_md": false, 00:18:50.656 "write_zeroes": true, 00:18:50.656 "zcopy": true, 00:18:50.656 "get_zone_info": false, 00:18:50.656 "zone_management": false, 00:18:50.656 "zone_append": false, 00:18:50.656 "compare": false, 00:18:50.656 "compare_and_write": false, 00:18:50.656 "abort": true, 00:18:50.656 "seek_hole": false, 00:18:50.656 "seek_data": false, 00:18:50.656 "copy": true, 00:18:50.656 "nvme_iov_md": false 00:18:50.656 }, 00:18:50.656 "memory_domains": [ 00:18:50.656 { 00:18:50.656 "dma_device_id": "system", 00:18:50.656 "dma_device_type": 1 00:18:50.656 }, 00:18:50.656 { 00:18:50.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.656 "dma_device_type": 2 00:18:50.656 } 00:18:50.656 ], 00:18:50.656 "driver_specific": {} 00:18:50.656 }' 00:18:50.656 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:50.913 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:50.913 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:50.913 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:50.913 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:50.913 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:50.913 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:50.913 10:33:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:50.913 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:50.913 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:50.913 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.172 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.172 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:51.172 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:51.172 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:51.172 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:51.172 "name": "BaseBdev2", 00:18:51.172 "aliases": [ 00:18:51.172 "b8323100-f3fb-4536-9b82-dced183491a9" 00:18:51.172 ], 00:18:51.172 "product_name": "Malloc disk", 00:18:51.172 "block_size": 512, 00:18:51.172 "num_blocks": 65536, 00:18:51.172 "uuid": "b8323100-f3fb-4536-9b82-dced183491a9", 00:18:51.172 "assigned_rate_limits": { 00:18:51.172 "rw_ios_per_sec": 0, 00:18:51.172 "rw_mbytes_per_sec": 0, 00:18:51.172 "r_mbytes_per_sec": 0, 00:18:51.172 "w_mbytes_per_sec": 0 00:18:51.172 }, 00:18:51.172 "claimed": true, 00:18:51.172 "claim_type": "exclusive_write", 00:18:51.172 "zoned": false, 00:18:51.172 "supported_io_types": { 00:18:51.172 "read": true, 00:18:51.172 "write": true, 00:18:51.172 "unmap": true, 00:18:51.172 "flush": true, 00:18:51.172 "reset": true, 00:18:51.172 "nvme_admin": false, 00:18:51.172 "nvme_io": false, 00:18:51.172 "nvme_io_md": false, 00:18:51.172 "write_zeroes": true, 00:18:51.172 "zcopy": true, 00:18:51.172 
"get_zone_info": false, 00:18:51.172 "zone_management": false, 00:18:51.172 "zone_append": false, 00:18:51.172 "compare": false, 00:18:51.172 "compare_and_write": false, 00:18:51.172 "abort": true, 00:18:51.172 "seek_hole": false, 00:18:51.172 "seek_data": false, 00:18:51.172 "copy": true, 00:18:51.172 "nvme_iov_md": false 00:18:51.172 }, 00:18:51.172 "memory_domains": [ 00:18:51.172 { 00:18:51.172 "dma_device_id": "system", 00:18:51.172 "dma_device_type": 1 00:18:51.172 }, 00:18:51.172 { 00:18:51.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.172 "dma_device_type": 2 00:18:51.172 } 00:18:51.172 ], 00:18:51.172 "driver_specific": {} 00:18:51.172 }' 00:18:51.430 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.430 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.430 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:51.430 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.430 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.430 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:51.430 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.430 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.430 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.430 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.688 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.688 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.688 10:33:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:51.688 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:51.688 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:51.947 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:51.947 "name": "BaseBdev3", 00:18:51.947 "aliases": [ 00:18:51.947 "7e43116f-6f28-4d2e-8a06-442b815fbf96" 00:18:51.947 ], 00:18:51.947 "product_name": "Malloc disk", 00:18:51.947 "block_size": 512, 00:18:51.947 "num_blocks": 65536, 00:18:51.947 "uuid": "7e43116f-6f28-4d2e-8a06-442b815fbf96", 00:18:51.947 "assigned_rate_limits": { 00:18:51.947 "rw_ios_per_sec": 0, 00:18:51.947 "rw_mbytes_per_sec": 0, 00:18:51.947 "r_mbytes_per_sec": 0, 00:18:51.947 "w_mbytes_per_sec": 0 00:18:51.947 }, 00:18:51.947 "claimed": true, 00:18:51.947 "claim_type": "exclusive_write", 00:18:51.947 "zoned": false, 00:18:51.947 "supported_io_types": { 00:18:51.947 "read": true, 00:18:51.947 "write": true, 00:18:51.947 "unmap": true, 00:18:51.947 "flush": true, 00:18:51.947 "reset": true, 00:18:51.947 "nvme_admin": false, 00:18:51.947 "nvme_io": false, 00:18:51.947 "nvme_io_md": false, 00:18:51.947 "write_zeroes": true, 00:18:51.947 "zcopy": true, 00:18:51.947 "get_zone_info": false, 00:18:51.947 "zone_management": false, 00:18:51.947 "zone_append": false, 00:18:51.947 "compare": false, 00:18:51.947 "compare_and_write": false, 00:18:51.947 "abort": true, 00:18:51.947 "seek_hole": false, 00:18:51.947 "seek_data": false, 00:18:51.947 "copy": true, 00:18:51.947 "nvme_iov_md": false 00:18:51.947 }, 00:18:51.947 "memory_domains": [ 00:18:51.947 { 00:18:51.947 "dma_device_id": "system", 00:18:51.947 "dma_device_type": 1 00:18:51.947 }, 00:18:51.947 { 00:18:51.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.947 
"dma_device_type": 2 00:18:51.947 } 00:18:51.947 ], 00:18:51.947 "driver_specific": {} 00:18:51.947 }' 00:18:51.947 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.947 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.947 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:51.947 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.947 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.947 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:51.947 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.947 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:52.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:52.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:52.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:52.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:52.463 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:52.463 "name": "BaseBdev4", 00:18:52.463 "aliases": [ 00:18:52.463 
"5d2e6847-5aa7-4c17-a84e-e363e7474edd" 00:18:52.463 ], 00:18:52.463 "product_name": "Malloc disk", 00:18:52.463 "block_size": 512, 00:18:52.463 "num_blocks": 65536, 00:18:52.463 "uuid": "5d2e6847-5aa7-4c17-a84e-e363e7474edd", 00:18:52.463 "assigned_rate_limits": { 00:18:52.463 "rw_ios_per_sec": 0, 00:18:52.463 "rw_mbytes_per_sec": 0, 00:18:52.463 "r_mbytes_per_sec": 0, 00:18:52.463 "w_mbytes_per_sec": 0 00:18:52.463 }, 00:18:52.463 "claimed": true, 00:18:52.463 "claim_type": "exclusive_write", 00:18:52.463 "zoned": false, 00:18:52.463 "supported_io_types": { 00:18:52.463 "read": true, 00:18:52.463 "write": true, 00:18:52.463 "unmap": true, 00:18:52.463 "flush": true, 00:18:52.463 "reset": true, 00:18:52.463 "nvme_admin": false, 00:18:52.463 "nvme_io": false, 00:18:52.463 "nvme_io_md": false, 00:18:52.463 "write_zeroes": true, 00:18:52.463 "zcopy": true, 00:18:52.463 "get_zone_info": false, 00:18:52.463 "zone_management": false, 00:18:52.463 "zone_append": false, 00:18:52.463 "compare": false, 00:18:52.463 "compare_and_write": false, 00:18:52.463 "abort": true, 00:18:52.463 "seek_hole": false, 00:18:52.463 "seek_data": false, 00:18:52.463 "copy": true, 00:18:52.463 "nvme_iov_md": false 00:18:52.463 }, 00:18:52.463 "memory_domains": [ 00:18:52.463 { 00:18:52.463 "dma_device_id": "system", 00:18:52.463 "dma_device_type": 1 00:18:52.463 }, 00:18:52.463 { 00:18:52.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.463 "dma_device_type": 2 00:18:52.463 } 00:18:52.463 ], 00:18:52.463 "driver_specific": {} 00:18:52.463 }' 00:18:52.463 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.463 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.463 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:52.463 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.463 10:33:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.463 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:52.463 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.721 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.721 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:52.721 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.721 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.721 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:52.721 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:52.979 [2024-07-25 10:33:56.523457] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:52.979 [2024-07-25 10:33:56.523512] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:52.979 [2024-07-25 10:33:56.523603] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.979 [2024-07-25 10:33:56.523677] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.979 [2024-07-25 10:33:56.523692] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd783a0 name Existed_Raid, state offline 00:18:52.979 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2406831 00:18:52.979 10:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2406831 ']' 00:18:52.979 10:33:56 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@954 -- # kill -0 2406831 00:18:52.979 10:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:52.979 10:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:52.979 10:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2406831 00:18:52.979 10:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:52.979 10:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:52.979 10:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2406831' 00:18:52.979 killing process with pid 2406831 00:18:52.979 10:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2406831 00:18:52.979 [2024-07-25 10:33:56.570232] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:52.979 10:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2406831 00:18:52.979 [2024-07-25 10:33:56.619316] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:53.237 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:53.237 00:18:53.237 real 0m32.564s 00:18:53.237 user 1m0.674s 00:18:53.237 sys 0m4.487s 00:18:53.237 10:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:53.237 10:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.237 ************************************ 00:18:53.237 END TEST raid_state_function_test_sb 00:18:53.237 ************************************ 00:18:53.237 10:33:56 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:53.237 10:33:56 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:53.237 10:33:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:53.237 10:33:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.237 ************************************ 00:18:53.237 START TEST raid_superblock_test 00:18:53.237 ************************************ 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:18:53.238 10:33:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2411344 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2411344 /var/tmp/spdk-raid.sock 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2411344 ']' 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:53.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:53.238 10:33:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.499 [2024-07-25 10:33:56.988290] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:18:53.499 [2024-07-25 10:33:56.988363] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411344 ] 00:18:53.499 [2024-07-25 10:33:57.065189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.499 [2024-07-25 10:33:57.183375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.756 [2024-07-25 10:33:57.258370] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.756 [2024-07-25 10:33:57.258415] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.320 10:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:54.320 10:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:54.320 10:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:54.320 10:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:54.320 10:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:54.320 10:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:54.320 10:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:54.320 10:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:54.320 10:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:54.320 10:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:54.320 10:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:54.579 malloc1 00:18:54.579 10:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:54.837 [2024-07-25 10:33:58.462752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:54.837 [2024-07-25 10:33:58.462809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.837 [2024-07-25 10:33:58.462834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22992b0 00:18:54.837 [2024-07-25 10:33:58.462850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.837 [2024-07-25 10:33:58.464502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.837 [2024-07-25 10:33:58.464536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:54.837 pt1 00:18:54.837 10:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:54.837 10:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:54.837 10:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:54.837 10:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:54.837 10:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:54.837 10:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:54.837 10:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:54.837 10:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:54.837 10:33:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:55.095 malloc2 00:18:55.095 10:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:55.352 [2024-07-25 10:33:59.012459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:55.352 [2024-07-25 10:33:59.012518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.352 [2024-07-25 10:33:59.012539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x244c1e0 00:18:55.352 [2024-07-25 10:33:59.012553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.352 [2024-07-25 10:33:59.013934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.352 [2024-07-25 10:33:59.013961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:55.352 pt2 00:18:55.352 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:55.352 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:55.352 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:18:55.352 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:18:55.352 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:55.352 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:55.352 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:55.352 10:33:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:55.352 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:55.610 malloc3 00:18:55.610 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:55.868 [2024-07-25 10:33:59.513662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:55.868 [2024-07-25 10:33:59.513719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.868 [2024-07-25 10:33:59.513741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24324d0 00:18:55.868 [2024-07-25 10:33:59.513755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.868 [2024-07-25 10:33:59.515193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.868 [2024-07-25 10:33:59.515221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:55.868 pt3 00:18:55.868 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:55.868 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:55.868 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:18:55.868 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:18:55.868 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:55.868 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:55.868 
10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:55.868 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:55.868 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:56.125 malloc4 00:18:56.126 10:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:56.383 [2024-07-25 10:34:00.021928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:56.383 [2024-07-25 10:34:00.021988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.383 [2024-07-25 10:34:00.022009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2290e30 00:18:56.383 [2024-07-25 10:34:00.022024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.383 [2024-07-25 10:34:00.023449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.383 [2024-07-25 10:34:00.023476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:56.383 pt4 00:18:56.383 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:56.383 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:56.383 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:56.641 [2024-07-25 10:34:00.262629] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:18:56.641 [2024-07-25 10:34:00.264062] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:56.641 [2024-07-25 10:34:00.264139] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:56.641 [2024-07-25 10:34:00.264197] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:56.641 [2024-07-25 10:34:00.264419] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2291980 00:18:56.641 [2024-07-25 10:34:00.264436] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:56.641 [2024-07-25 10:34:00.264674] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22b01a0 00:18:56.641 [2024-07-25 10:34:00.264871] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2291980 00:18:56.641 [2024-07-25 10:34:00.264887] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2291980 00:18:56.641 [2024-07-25 10:34:00.265020] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.641 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:56.641 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:56.641 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:56.641 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:56.641 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:56.641 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:56.641 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:56.641 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:18:56.641 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:56.641 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:56.641 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.641 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.898 10:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:56.899 "name": "raid_bdev1", 00:18:56.899 "uuid": "9c0922ba-02aa-4540-a68b-129c17c84afa", 00:18:56.899 "strip_size_kb": 64, 00:18:56.899 "state": "online", 00:18:56.899 "raid_level": "concat", 00:18:56.899 "superblock": true, 00:18:56.899 "num_base_bdevs": 4, 00:18:56.899 "num_base_bdevs_discovered": 4, 00:18:56.899 "num_base_bdevs_operational": 4, 00:18:56.899 "base_bdevs_list": [ 00:18:56.899 { 00:18:56.899 "name": "pt1", 00:18:56.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:56.899 "is_configured": true, 00:18:56.899 "data_offset": 2048, 00:18:56.899 "data_size": 63488 00:18:56.899 }, 00:18:56.899 { 00:18:56.899 "name": "pt2", 00:18:56.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:56.899 "is_configured": true, 00:18:56.899 "data_offset": 2048, 00:18:56.899 "data_size": 63488 00:18:56.899 }, 00:18:56.899 { 00:18:56.899 "name": "pt3", 00:18:56.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:56.899 "is_configured": true, 00:18:56.899 "data_offset": 2048, 00:18:56.899 "data_size": 63488 00:18:56.899 }, 00:18:56.899 { 00:18:56.899 "name": "pt4", 00:18:56.899 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:56.899 "is_configured": true, 00:18:56.899 "data_offset": 2048, 00:18:56.899 "data_size": 63488 00:18:56.899 } 00:18:56.899 ] 00:18:56.899 }' 00:18:56.899 10:34:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:56.899 10:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.463 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:57.463 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:57.463 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:57.463 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:57.463 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:57.463 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:57.463 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:57.463 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:57.721 [2024-07-25 10:34:01.333720] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:57.721 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:57.721 "name": "raid_bdev1", 00:18:57.721 "aliases": [ 00:18:57.721 "9c0922ba-02aa-4540-a68b-129c17c84afa" 00:18:57.721 ], 00:18:57.721 "product_name": "Raid Volume", 00:18:57.721 "block_size": 512, 00:18:57.721 "num_blocks": 253952, 00:18:57.721 "uuid": "9c0922ba-02aa-4540-a68b-129c17c84afa", 00:18:57.721 "assigned_rate_limits": { 00:18:57.721 "rw_ios_per_sec": 0, 00:18:57.721 "rw_mbytes_per_sec": 0, 00:18:57.721 "r_mbytes_per_sec": 0, 00:18:57.721 "w_mbytes_per_sec": 0 00:18:57.721 }, 00:18:57.721 "claimed": false, 00:18:57.721 "zoned": false, 00:18:57.721 "supported_io_types": { 00:18:57.721 "read": true, 00:18:57.721 "write": true, 00:18:57.721 
"unmap": true, 00:18:57.721 "flush": true, 00:18:57.721 "reset": true, 00:18:57.721 "nvme_admin": false, 00:18:57.721 "nvme_io": false, 00:18:57.721 "nvme_io_md": false, 00:18:57.721 "write_zeroes": true, 00:18:57.721 "zcopy": false, 00:18:57.721 "get_zone_info": false, 00:18:57.721 "zone_management": false, 00:18:57.721 "zone_append": false, 00:18:57.721 "compare": false, 00:18:57.721 "compare_and_write": false, 00:18:57.721 "abort": false, 00:18:57.721 "seek_hole": false, 00:18:57.721 "seek_data": false, 00:18:57.721 "copy": false, 00:18:57.721 "nvme_iov_md": false 00:18:57.721 }, 00:18:57.721 "memory_domains": [ 00:18:57.721 { 00:18:57.721 "dma_device_id": "system", 00:18:57.721 "dma_device_type": 1 00:18:57.721 }, 00:18:57.721 { 00:18:57.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.721 "dma_device_type": 2 00:18:57.721 }, 00:18:57.721 { 00:18:57.721 "dma_device_id": "system", 00:18:57.721 "dma_device_type": 1 00:18:57.721 }, 00:18:57.721 { 00:18:57.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.721 "dma_device_type": 2 00:18:57.721 }, 00:18:57.721 { 00:18:57.721 "dma_device_id": "system", 00:18:57.721 "dma_device_type": 1 00:18:57.721 }, 00:18:57.721 { 00:18:57.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.721 "dma_device_type": 2 00:18:57.721 }, 00:18:57.721 { 00:18:57.721 "dma_device_id": "system", 00:18:57.721 "dma_device_type": 1 00:18:57.721 }, 00:18:57.721 { 00:18:57.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.721 "dma_device_type": 2 00:18:57.721 } 00:18:57.721 ], 00:18:57.721 "driver_specific": { 00:18:57.721 "raid": { 00:18:57.721 "uuid": "9c0922ba-02aa-4540-a68b-129c17c84afa", 00:18:57.721 "strip_size_kb": 64, 00:18:57.721 "state": "online", 00:18:57.721 "raid_level": "concat", 00:18:57.721 "superblock": true, 00:18:57.721 "num_base_bdevs": 4, 00:18:57.721 "num_base_bdevs_discovered": 4, 00:18:57.721 "num_base_bdevs_operational": 4, 00:18:57.721 "base_bdevs_list": [ 00:18:57.721 { 00:18:57.721 "name": "pt1", 
00:18:57.721 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:57.721 "is_configured": true, 00:18:57.721 "data_offset": 2048, 00:18:57.721 "data_size": 63488 00:18:57.721 }, 00:18:57.721 { 00:18:57.721 "name": "pt2", 00:18:57.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.721 "is_configured": true, 00:18:57.721 "data_offset": 2048, 00:18:57.721 "data_size": 63488 00:18:57.721 }, 00:18:57.721 { 00:18:57.721 "name": "pt3", 00:18:57.721 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:57.721 "is_configured": true, 00:18:57.721 "data_offset": 2048, 00:18:57.721 "data_size": 63488 00:18:57.721 }, 00:18:57.721 { 00:18:57.721 "name": "pt4", 00:18:57.721 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:57.721 "is_configured": true, 00:18:57.721 "data_offset": 2048, 00:18:57.721 "data_size": 63488 00:18:57.721 } 00:18:57.721 ] 00:18:57.721 } 00:18:57.721 } 00:18:57.721 }' 00:18:57.721 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:57.721 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:57.721 pt2 00:18:57.721 pt3 00:18:57.721 pt4' 00:18:57.721 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:57.721 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:57.721 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:57.979 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:57.979 "name": "pt1", 00:18:57.979 "aliases": [ 00:18:57.979 "00000000-0000-0000-0000-000000000001" 00:18:57.979 ], 00:18:57.979 "product_name": "passthru", 00:18:57.979 "block_size": 512, 00:18:57.979 "num_blocks": 65536, 00:18:57.979 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:57.979 "assigned_rate_limits": { 00:18:57.979 "rw_ios_per_sec": 0, 00:18:57.979 "rw_mbytes_per_sec": 0, 00:18:57.979 "r_mbytes_per_sec": 0, 00:18:57.979 "w_mbytes_per_sec": 0 00:18:57.979 }, 00:18:57.979 "claimed": true, 00:18:57.979 "claim_type": "exclusive_write", 00:18:57.979 "zoned": false, 00:18:57.979 "supported_io_types": { 00:18:57.979 "read": true, 00:18:57.979 "write": true, 00:18:57.979 "unmap": true, 00:18:57.979 "flush": true, 00:18:57.979 "reset": true, 00:18:57.979 "nvme_admin": false, 00:18:57.979 "nvme_io": false, 00:18:57.979 "nvme_io_md": false, 00:18:57.979 "write_zeroes": true, 00:18:57.979 "zcopy": true, 00:18:57.979 "get_zone_info": false, 00:18:57.979 "zone_management": false, 00:18:57.979 "zone_append": false, 00:18:57.979 "compare": false, 00:18:57.979 "compare_and_write": false, 00:18:57.979 "abort": true, 00:18:57.979 "seek_hole": false, 00:18:57.979 "seek_data": false, 00:18:57.979 "copy": true, 00:18:57.979 "nvme_iov_md": false 00:18:57.979 }, 00:18:57.979 "memory_domains": [ 00:18:57.979 { 00:18:57.979 "dma_device_id": "system", 00:18:57.979 "dma_device_type": 1 00:18:57.979 }, 00:18:57.979 { 00:18:57.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.979 "dma_device_type": 2 00:18:57.979 } 00:18:57.979 ], 00:18:57.979 "driver_specific": { 00:18:57.979 "passthru": { 00:18:57.979 "name": "pt1", 00:18:57.979 "base_bdev_name": "malloc1" 00:18:57.979 } 00:18:57.979 } 00:18:57.979 }' 00:18:57.979 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.979 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.237 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:58.237 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.237 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.237 10:34:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:58.237 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.237 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.237 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:58.237 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.237 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.237 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:58.237 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:58.237 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:58.237 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:58.494 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:58.494 "name": "pt2", 00:18:58.494 "aliases": [ 00:18:58.494 "00000000-0000-0000-0000-000000000002" 00:18:58.494 ], 00:18:58.494 "product_name": "passthru", 00:18:58.494 "block_size": 512, 00:18:58.494 "num_blocks": 65536, 00:18:58.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.494 "assigned_rate_limits": { 00:18:58.494 "rw_ios_per_sec": 0, 00:18:58.494 "rw_mbytes_per_sec": 0, 00:18:58.494 "r_mbytes_per_sec": 0, 00:18:58.494 "w_mbytes_per_sec": 0 00:18:58.494 }, 00:18:58.494 "claimed": true, 00:18:58.494 "claim_type": "exclusive_write", 00:18:58.494 "zoned": false, 00:18:58.494 "supported_io_types": { 00:18:58.494 "read": true, 00:18:58.494 "write": true, 00:18:58.494 "unmap": true, 00:18:58.494 "flush": true, 00:18:58.494 "reset": true, 00:18:58.494 "nvme_admin": false, 00:18:58.494 
"nvme_io": false, 00:18:58.494 "nvme_io_md": false, 00:18:58.494 "write_zeroes": true, 00:18:58.494 "zcopy": true, 00:18:58.494 "get_zone_info": false, 00:18:58.494 "zone_management": false, 00:18:58.495 "zone_append": false, 00:18:58.495 "compare": false, 00:18:58.495 "compare_and_write": false, 00:18:58.495 "abort": true, 00:18:58.495 "seek_hole": false, 00:18:58.495 "seek_data": false, 00:18:58.495 "copy": true, 00:18:58.495 "nvme_iov_md": false 00:18:58.495 }, 00:18:58.495 "memory_domains": [ 00:18:58.495 { 00:18:58.495 "dma_device_id": "system", 00:18:58.495 "dma_device_type": 1 00:18:58.495 }, 00:18:58.495 { 00:18:58.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.495 "dma_device_type": 2 00:18:58.495 } 00:18:58.495 ], 00:18:58.495 "driver_specific": { 00:18:58.495 "passthru": { 00:18:58.495 "name": "pt2", 00:18:58.495 "base_bdev_name": "malloc2" 00:18:58.495 } 00:18:58.495 } 00:18:58.495 }' 00:18:58.495 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:58.752 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:59.010 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:59.010 "name": "pt3", 00:18:59.010 "aliases": [ 00:18:59.010 "00000000-0000-0000-0000-000000000003" 00:18:59.010 ], 00:18:59.010 "product_name": "passthru", 00:18:59.010 "block_size": 512, 00:18:59.010 "num_blocks": 65536, 00:18:59.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:59.010 "assigned_rate_limits": { 00:18:59.010 "rw_ios_per_sec": 0, 00:18:59.010 "rw_mbytes_per_sec": 0, 00:18:59.010 "r_mbytes_per_sec": 0, 00:18:59.010 "w_mbytes_per_sec": 0 00:18:59.010 }, 00:18:59.010 "claimed": true, 00:18:59.010 "claim_type": "exclusive_write", 00:18:59.010 "zoned": false, 00:18:59.010 "supported_io_types": { 00:18:59.010 "read": true, 00:18:59.010 "write": true, 00:18:59.010 "unmap": true, 00:18:59.010 "flush": true, 00:18:59.010 "reset": true, 00:18:59.010 "nvme_admin": false, 00:18:59.010 "nvme_io": false, 00:18:59.010 "nvme_io_md": false, 00:18:59.010 "write_zeroes": true, 00:18:59.010 "zcopy": true, 00:18:59.010 "get_zone_info": false, 00:18:59.010 "zone_management": false, 00:18:59.010 "zone_append": false, 00:18:59.010 "compare": false, 00:18:59.010 "compare_and_write": false, 00:18:59.010 "abort": true, 00:18:59.010 "seek_hole": false, 00:18:59.010 "seek_data": false, 00:18:59.010 "copy": true, 00:18:59.010 "nvme_iov_md": false 00:18:59.010 }, 00:18:59.010 "memory_domains": [ 00:18:59.010 { 00:18:59.010 "dma_device_id": "system", 00:18:59.010 
"dma_device_type": 1 00:18:59.010 }, 00:18:59.010 { 00:18:59.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.010 "dma_device_type": 2 00:18:59.010 } 00:18:59.010 ], 00:18:59.010 "driver_specific": { 00:18:59.010 "passthru": { 00:18:59.010 "name": "pt3", 00:18:59.010 "base_bdev_name": "malloc3" 00:18:59.010 } 00:18:59.010 } 00:18:59.010 }' 00:18:59.010 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.268 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.268 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:59.268 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.268 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.268 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:59.268 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.268 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.268 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:59.268 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.268 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.525 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:59.525 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:59.525 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:59.525 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:59.783 10:34:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:59.783 "name": "pt4", 00:18:59.783 "aliases": [ 00:18:59.783 "00000000-0000-0000-0000-000000000004" 00:18:59.783 ], 00:18:59.783 "product_name": "passthru", 00:18:59.783 "block_size": 512, 00:18:59.783 "num_blocks": 65536, 00:18:59.783 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:59.783 "assigned_rate_limits": { 00:18:59.783 "rw_ios_per_sec": 0, 00:18:59.783 "rw_mbytes_per_sec": 0, 00:18:59.783 "r_mbytes_per_sec": 0, 00:18:59.783 "w_mbytes_per_sec": 0 00:18:59.783 }, 00:18:59.783 "claimed": true, 00:18:59.783 "claim_type": "exclusive_write", 00:18:59.783 "zoned": false, 00:18:59.783 "supported_io_types": { 00:18:59.783 "read": true, 00:18:59.783 "write": true, 00:18:59.783 "unmap": true, 00:18:59.783 "flush": true, 00:18:59.783 "reset": true, 00:18:59.783 "nvme_admin": false, 00:18:59.783 "nvme_io": false, 00:18:59.783 "nvme_io_md": false, 00:18:59.783 "write_zeroes": true, 00:18:59.783 "zcopy": true, 00:18:59.783 "get_zone_info": false, 00:18:59.783 "zone_management": false, 00:18:59.783 "zone_append": false, 00:18:59.783 "compare": false, 00:18:59.783 "compare_and_write": false, 00:18:59.783 "abort": true, 00:18:59.783 "seek_hole": false, 00:18:59.783 "seek_data": false, 00:18:59.783 "copy": true, 00:18:59.783 "nvme_iov_md": false 00:18:59.783 }, 00:18:59.783 "memory_domains": [ 00:18:59.783 { 00:18:59.783 "dma_device_id": "system", 00:18:59.783 "dma_device_type": 1 00:18:59.783 }, 00:18:59.783 { 00:18:59.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.783 "dma_device_type": 2 00:18:59.783 } 00:18:59.783 ], 00:18:59.783 "driver_specific": { 00:18:59.783 "passthru": { 00:18:59.783 "name": "pt4", 00:18:59.783 "base_bdev_name": "malloc4" 00:18:59.783 } 00:18:59.783 } 00:18:59.783 }' 00:18:59.783 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.783 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.783 10:34:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:59.783 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.783 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.783 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:59.783 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.783 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.783 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:59.783 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.040 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.040 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:00.040 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:00.040 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:00.298 [2024-07-25 10:34:03.792194] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.298 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=9c0922ba-02aa-4540-a68b-129c17c84afa 00:19:00.298 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 9c0922ba-02aa-4540-a68b-129c17c84afa ']' 00:19:00.298 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:00.555 [2024-07-25 10:34:04.032565] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:00.555 
[2024-07-25 10:34:04.032596] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.555 [2024-07-25 10:34:04.032678] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.555 [2024-07-25 10:34:04.032750] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.555 [2024-07-25 10:34:04.032763] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2291980 name raid_bdev1, state offline 00:19:00.555 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.555 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:00.813 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:00.813 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:00.813 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:00.813 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:01.070 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:01.070 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:01.328 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:01.328 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:01.586 10:34:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:01.586 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:01.844 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:01.844 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:02.126 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:02.126 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:02.126 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:02.126 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:02.126 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:02.126 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.126 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:02.126 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.126 10:34:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:02.126 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.126 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:02.126 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:02.127 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:02.385 [2024-07-25 10:34:05.841378] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:02.385 [2024-07-25 10:34:05.842835] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:02.385 [2024-07-25 10:34:05.842889] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:02.385 [2024-07-25 10:34:05.842938] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:02.385 [2024-07-25 10:34:05.843006] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:02.385 [2024-07-25 10:34:05.843064] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:02.385 [2024-07-25 10:34:05.843097] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:02.385 [2024-07-25 10:34:05.843143] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:02.385 
[2024-07-25 10:34:05.843167] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.385 [2024-07-25 10:34:05.843179] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2430df0 name raid_bdev1, state configuring 00:19:02.385 request: 00:19:02.385 { 00:19:02.385 "name": "raid_bdev1", 00:19:02.385 "raid_level": "concat", 00:19:02.385 "base_bdevs": [ 00:19:02.385 "malloc1", 00:19:02.385 "malloc2", 00:19:02.385 "malloc3", 00:19:02.385 "malloc4" 00:19:02.385 ], 00:19:02.385 "strip_size_kb": 64, 00:19:02.385 "superblock": false, 00:19:02.385 "method": "bdev_raid_create", 00:19:02.385 "req_id": 1 00:19:02.385 } 00:19:02.385 Got JSON-RPC error response 00:19:02.385 response: 00:19:02.385 { 00:19:02.385 "code": -17, 00:19:02.385 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:02.385 } 00:19:02.385 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:02.385 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:02.385 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:02.385 10:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:02.385 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.385 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:19:02.644 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:19:02.644 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:19:02.644 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:19:02.902 [2024-07-25 10:34:06.386746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:02.902 [2024-07-25 10:34:06.386823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.902 [2024-07-25 10:34:06.386850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22994e0 00:19:02.902 [2024-07-25 10:34:06.386866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.902 [2024-07-25 10:34:06.388646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.902 [2024-07-25 10:34:06.388675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:02.902 [2024-07-25 10:34:06.388771] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:02.902 [2024-07-25 10:34:06.388811] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.902 pt1 00:19:02.902 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:02.902 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:02.902 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:02.902 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:02.902 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:02.902 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:02.902 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.902 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.902 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:19:02.902 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.902 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.902 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.160 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.160 "name": "raid_bdev1", 00:19:03.160 "uuid": "9c0922ba-02aa-4540-a68b-129c17c84afa", 00:19:03.160 "strip_size_kb": 64, 00:19:03.160 "state": "configuring", 00:19:03.160 "raid_level": "concat", 00:19:03.160 "superblock": true, 00:19:03.160 "num_base_bdevs": 4, 00:19:03.160 "num_base_bdevs_discovered": 1, 00:19:03.160 "num_base_bdevs_operational": 4, 00:19:03.160 "base_bdevs_list": [ 00:19:03.160 { 00:19:03.160 "name": "pt1", 00:19:03.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.160 "is_configured": true, 00:19:03.160 "data_offset": 2048, 00:19:03.160 "data_size": 63488 00:19:03.160 }, 00:19:03.160 { 00:19:03.160 "name": null, 00:19:03.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.160 "is_configured": false, 00:19:03.160 "data_offset": 2048, 00:19:03.160 "data_size": 63488 00:19:03.160 }, 00:19:03.160 { 00:19:03.160 "name": null, 00:19:03.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:03.160 "is_configured": false, 00:19:03.160 "data_offset": 2048, 00:19:03.160 "data_size": 63488 00:19:03.160 }, 00:19:03.160 { 00:19:03.160 "name": null, 00:19:03.160 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:03.160 "is_configured": false, 00:19:03.160 "data_offset": 2048, 00:19:03.160 "data_size": 63488 00:19:03.160 } 00:19:03.160 ] 00:19:03.160 }' 00:19:03.160 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.160 10:34:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.726 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:19:03.726 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:03.726 [2024-07-25 10:34:07.425503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:03.726 [2024-07-25 10:34:07.425583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.726 [2024-07-25 10:34:07.425607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24331b0 00:19:03.726 [2024-07-25 10:34:07.425631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.726 [2024-07-25 10:34:07.426121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.726 [2024-07-25 10:34:07.426148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:03.726 [2024-07-25 10:34:07.426237] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:03.726 [2024-07-25 10:34:07.426267] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:03.726 pt2 00:19:03.984 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:04.242 [2024-07-25 10:34:07.718279] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:04.242 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:04.242 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:04.242 10:34:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:04.242 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:04.242 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:04.242 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:04.242 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.242 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.242 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.242 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.242 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.242 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.501 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.501 "name": "raid_bdev1", 00:19:04.501 "uuid": "9c0922ba-02aa-4540-a68b-129c17c84afa", 00:19:04.501 "strip_size_kb": 64, 00:19:04.501 "state": "configuring", 00:19:04.501 "raid_level": "concat", 00:19:04.501 "superblock": true, 00:19:04.501 "num_base_bdevs": 4, 00:19:04.501 "num_base_bdevs_discovered": 1, 00:19:04.501 "num_base_bdevs_operational": 4, 00:19:04.501 "base_bdevs_list": [ 00:19:04.501 { 00:19:04.501 "name": "pt1", 00:19:04.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.501 "is_configured": true, 00:19:04.501 "data_offset": 2048, 00:19:04.501 "data_size": 63488 00:19:04.501 }, 00:19:04.501 { 00:19:04.501 "name": null, 00:19:04.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.501 
"is_configured": false, 00:19:04.501 "data_offset": 2048, 00:19:04.501 "data_size": 63488 00:19:04.501 }, 00:19:04.501 { 00:19:04.501 "name": null, 00:19:04.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:04.501 "is_configured": false, 00:19:04.501 "data_offset": 2048, 00:19:04.501 "data_size": 63488 00:19:04.501 }, 00:19:04.501 { 00:19:04.501 "name": null, 00:19:04.501 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:04.501 "is_configured": false, 00:19:04.501 "data_offset": 2048, 00:19:04.501 "data_size": 63488 00:19:04.501 } 00:19:04.501 ] 00:19:04.501 }' 00:19:04.501 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.501 10:34:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.067 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:19:05.067 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:05.067 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:05.325 [2024-07-25 10:34:08.849322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:05.325 [2024-07-25 10:34:08.849408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.325 [2024-07-25 10:34:08.849432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2291fb0 00:19:05.325 [2024-07-25 10:34:08.849462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.325 [2024-07-25 10:34:08.849900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.325 [2024-07-25 10:34:08.849922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:05.325 [2024-07-25 10:34:08.850026] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:05.325 [2024-07-25 10:34:08.850053] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:05.325 pt2 00:19:05.325 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:05.325 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:05.325 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:05.583 [2024-07-25 10:34:09.134072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:05.583 [2024-07-25 10:34:09.134152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.583 [2024-07-25 10:34:09.134184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2292420 00:19:05.583 [2024-07-25 10:34:09.134201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.583 [2024-07-25 10:34:09.134660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.584 [2024-07-25 10:34:09.134688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:05.584 [2024-07-25 10:34:09.134785] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:05.584 [2024-07-25 10:34:09.134815] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:05.584 pt3 00:19:05.584 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:05.584 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:05.584 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:05.842 [2024-07-25 10:34:09.418808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:05.842 [2024-07-25 10:34:09.418869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.842 [2024-07-25 10:34:09.418897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2295560 00:19:05.842 [2024-07-25 10:34:09.418913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.842 [2024-07-25 10:34:09.419344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.842 [2024-07-25 10:34:09.419372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:05.842 [2024-07-25 10:34:09.419460] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:05.842 [2024-07-25 10:34:09.419490] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:05.842 [2024-07-25 10:34:09.419644] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x24334d0 00:19:05.842 [2024-07-25 10:34:09.419660] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:05.842 [2024-07-25 10:34:09.419832] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2295d30 00:19:05.842 [2024-07-25 10:34:09.419987] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24334d0 00:19:05.842 [2024-07-25 10:34:09.420003] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24334d0 00:19:05.842 [2024-07-25 10:34:09.420130] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.842 pt4 00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 
00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.842 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.100 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:06.100 "name": "raid_bdev1", 00:19:06.100 "uuid": "9c0922ba-02aa-4540-a68b-129c17c84afa", 00:19:06.100 "strip_size_kb": 64, 00:19:06.100 "state": "online", 00:19:06.100 "raid_level": "concat", 00:19:06.100 "superblock": true, 00:19:06.100 "num_base_bdevs": 4, 00:19:06.100 "num_base_bdevs_discovered": 4, 00:19:06.100 "num_base_bdevs_operational": 4, 
00:19:06.100 "base_bdevs_list": [ 00:19:06.100 { 00:19:06.100 "name": "pt1", 00:19:06.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.100 "is_configured": true, 00:19:06.100 "data_offset": 2048, 00:19:06.100 "data_size": 63488 00:19:06.100 }, 00:19:06.100 { 00:19:06.100 "name": "pt2", 00:19:06.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.100 "is_configured": true, 00:19:06.100 "data_offset": 2048, 00:19:06.100 "data_size": 63488 00:19:06.100 }, 00:19:06.100 { 00:19:06.100 "name": "pt3", 00:19:06.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:06.100 "is_configured": true, 00:19:06.100 "data_offset": 2048, 00:19:06.100 "data_size": 63488 00:19:06.100 }, 00:19:06.100 { 00:19:06.100 "name": "pt4", 00:19:06.100 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:06.100 "is_configured": true, 00:19:06.100 "data_offset": 2048, 00:19:06.100 "data_size": 63488 00:19:06.100 } 00:19:06.100 ] 00:19:06.100 }' 00:19:06.100 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:06.100 10:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.666 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:19:06.666 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:06.666 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:06.666 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:06.666 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:06.666 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:06.666 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 
00:19:06.666 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:06.924 [2024-07-25 10:34:10.538075] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.924 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:06.924 "name": "raid_bdev1", 00:19:06.924 "aliases": [ 00:19:06.924 "9c0922ba-02aa-4540-a68b-129c17c84afa" 00:19:06.924 ], 00:19:06.924 "product_name": "Raid Volume", 00:19:06.924 "block_size": 512, 00:19:06.924 "num_blocks": 253952, 00:19:06.924 "uuid": "9c0922ba-02aa-4540-a68b-129c17c84afa", 00:19:06.924 "assigned_rate_limits": { 00:19:06.924 "rw_ios_per_sec": 0, 00:19:06.924 "rw_mbytes_per_sec": 0, 00:19:06.924 "r_mbytes_per_sec": 0, 00:19:06.924 "w_mbytes_per_sec": 0 00:19:06.924 }, 00:19:06.924 "claimed": false, 00:19:06.924 "zoned": false, 00:19:06.924 "supported_io_types": { 00:19:06.924 "read": true, 00:19:06.924 "write": true, 00:19:06.924 "unmap": true, 00:19:06.924 "flush": true, 00:19:06.924 "reset": true, 00:19:06.924 "nvme_admin": false, 00:19:06.924 "nvme_io": false, 00:19:06.924 "nvme_io_md": false, 00:19:06.924 "write_zeroes": true, 00:19:06.924 "zcopy": false, 00:19:06.924 "get_zone_info": false, 00:19:06.924 "zone_management": false, 00:19:06.924 "zone_append": false, 00:19:06.924 "compare": false, 00:19:06.924 "compare_and_write": false, 00:19:06.924 "abort": false, 00:19:06.924 "seek_hole": false, 00:19:06.924 "seek_data": false, 00:19:06.924 "copy": false, 00:19:06.924 "nvme_iov_md": false 00:19:06.924 }, 00:19:06.924 "memory_domains": [ 00:19:06.924 { 00:19:06.924 "dma_device_id": "system", 00:19:06.924 "dma_device_type": 1 00:19:06.924 }, 00:19:06.924 { 00:19:06.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.924 "dma_device_type": 2 00:19:06.924 }, 00:19:06.924 { 00:19:06.924 "dma_device_id": "system", 00:19:06.924 "dma_device_type": 1 00:19:06.924 }, 00:19:06.924 { 00:19:06.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:19:06.924 "dma_device_type": 2 00:19:06.924 }, 00:19:06.924 { 00:19:06.924 "dma_device_id": "system", 00:19:06.924 "dma_device_type": 1 00:19:06.924 }, 00:19:06.924 { 00:19:06.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.924 "dma_device_type": 2 00:19:06.924 }, 00:19:06.924 { 00:19:06.924 "dma_device_id": "system", 00:19:06.924 "dma_device_type": 1 00:19:06.924 }, 00:19:06.924 { 00:19:06.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.924 "dma_device_type": 2 00:19:06.924 } 00:19:06.924 ], 00:19:06.924 "driver_specific": { 00:19:06.924 "raid": { 00:19:06.924 "uuid": "9c0922ba-02aa-4540-a68b-129c17c84afa", 00:19:06.924 "strip_size_kb": 64, 00:19:06.924 "state": "online", 00:19:06.924 "raid_level": "concat", 00:19:06.924 "superblock": true, 00:19:06.924 "num_base_bdevs": 4, 00:19:06.924 "num_base_bdevs_discovered": 4, 00:19:06.924 "num_base_bdevs_operational": 4, 00:19:06.924 "base_bdevs_list": [ 00:19:06.924 { 00:19:06.924 "name": "pt1", 00:19:06.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.924 "is_configured": true, 00:19:06.924 "data_offset": 2048, 00:19:06.924 "data_size": 63488 00:19:06.924 }, 00:19:06.924 { 00:19:06.924 "name": "pt2", 00:19:06.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.924 "is_configured": true, 00:19:06.924 "data_offset": 2048, 00:19:06.924 "data_size": 63488 00:19:06.924 }, 00:19:06.924 { 00:19:06.924 "name": "pt3", 00:19:06.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:06.924 "is_configured": true, 00:19:06.924 "data_offset": 2048, 00:19:06.924 "data_size": 63488 00:19:06.924 }, 00:19:06.924 { 00:19:06.924 "name": "pt4", 00:19:06.924 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:06.924 "is_configured": true, 00:19:06.924 "data_offset": 2048, 00:19:06.924 "data_size": 63488 00:19:06.924 } 00:19:06.924 ] 00:19:06.924 } 00:19:06.924 } 00:19:06.924 }' 00:19:06.924 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:06.924 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:06.924 pt2 00:19:06.924 pt3 00:19:06.924 pt4' 00:19:06.925 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:06.925 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:06.925 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:07.183 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:07.183 "name": "pt1", 00:19:07.183 "aliases": [ 00:19:07.183 "00000000-0000-0000-0000-000000000001" 00:19:07.183 ], 00:19:07.183 "product_name": "passthru", 00:19:07.183 "block_size": 512, 00:19:07.183 "num_blocks": 65536, 00:19:07.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:07.183 "assigned_rate_limits": { 00:19:07.183 "rw_ios_per_sec": 0, 00:19:07.183 "rw_mbytes_per_sec": 0, 00:19:07.183 "r_mbytes_per_sec": 0, 00:19:07.183 "w_mbytes_per_sec": 0 00:19:07.183 }, 00:19:07.183 "claimed": true, 00:19:07.183 "claim_type": "exclusive_write", 00:19:07.183 "zoned": false, 00:19:07.183 "supported_io_types": { 00:19:07.183 "read": true, 00:19:07.183 "write": true, 00:19:07.183 "unmap": true, 00:19:07.183 "flush": true, 00:19:07.183 "reset": true, 00:19:07.183 "nvme_admin": false, 00:19:07.183 "nvme_io": false, 00:19:07.183 "nvme_io_md": false, 00:19:07.183 "write_zeroes": true, 00:19:07.183 "zcopy": true, 00:19:07.183 "get_zone_info": false, 00:19:07.183 "zone_management": false, 00:19:07.183 "zone_append": false, 00:19:07.183 "compare": false, 00:19:07.183 "compare_and_write": false, 00:19:07.183 "abort": true, 00:19:07.183 "seek_hole": false, 00:19:07.183 "seek_data": false, 00:19:07.183 "copy": true, 00:19:07.183 "nvme_iov_md": 
false 00:19:07.183 }, 00:19:07.183 "memory_domains": [ 00:19:07.183 { 00:19:07.183 "dma_device_id": "system", 00:19:07.183 "dma_device_type": 1 00:19:07.183 }, 00:19:07.183 { 00:19:07.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.183 "dma_device_type": 2 00:19:07.183 } 00:19:07.183 ], 00:19:07.183 "driver_specific": { 00:19:07.183 "passthru": { 00:19:07.183 "name": "pt1", 00:19:07.183 "base_bdev_name": "malloc1" 00:19:07.183 } 00:19:07.183 } 00:19:07.183 }' 00:19:07.183 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.441 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.441 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:07.441 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.441 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.441 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:07.441 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.441 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.441 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:07.441 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.441 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.441 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:07.441 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:07.699 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:07.699 10:34:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:07.699 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:07.699 "name": "pt2", 00:19:07.699 "aliases": [ 00:19:07.699 "00000000-0000-0000-0000-000000000002" 00:19:07.699 ], 00:19:07.699 "product_name": "passthru", 00:19:07.699 "block_size": 512, 00:19:07.699 "num_blocks": 65536, 00:19:07.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.699 "assigned_rate_limits": { 00:19:07.699 "rw_ios_per_sec": 0, 00:19:07.699 "rw_mbytes_per_sec": 0, 00:19:07.699 "r_mbytes_per_sec": 0, 00:19:07.699 "w_mbytes_per_sec": 0 00:19:07.699 }, 00:19:07.699 "claimed": true, 00:19:07.699 "claim_type": "exclusive_write", 00:19:07.699 "zoned": false, 00:19:07.699 "supported_io_types": { 00:19:07.699 "read": true, 00:19:07.699 "write": true, 00:19:07.699 "unmap": true, 00:19:07.699 "flush": true, 00:19:07.699 "reset": true, 00:19:07.699 "nvme_admin": false, 00:19:07.699 "nvme_io": false, 00:19:07.699 "nvme_io_md": false, 00:19:07.699 "write_zeroes": true, 00:19:07.699 "zcopy": true, 00:19:07.699 "get_zone_info": false, 00:19:07.699 "zone_management": false, 00:19:07.699 "zone_append": false, 00:19:07.699 "compare": false, 00:19:07.699 "compare_and_write": false, 00:19:07.699 "abort": true, 00:19:07.699 "seek_hole": false, 00:19:07.699 "seek_data": false, 00:19:07.699 "copy": true, 00:19:07.699 "nvme_iov_md": false 00:19:07.699 }, 00:19:07.699 "memory_domains": [ 00:19:07.699 { 00:19:07.699 "dma_device_id": "system", 00:19:07.699 "dma_device_type": 1 00:19:07.699 }, 00:19:07.699 { 00:19:07.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.699 "dma_device_type": 2 00:19:07.699 } 00:19:07.699 ], 00:19:07.699 "driver_specific": { 00:19:07.699 "passthru": { 00:19:07.699 "name": "pt2", 00:19:07.699 "base_bdev_name": "malloc2" 00:19:07.699 } 00:19:07.699 } 00:19:07.699 }' 00:19:07.699 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:19:07.957 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.957 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:07.957 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.957 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.957 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:07.957 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.957 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.957 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:07.957 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.214 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.214 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:08.214 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:08.214 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:08.214 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:08.472 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:08.472 "name": "pt3", 00:19:08.472 "aliases": [ 00:19:08.472 "00000000-0000-0000-0000-000000000003" 00:19:08.472 ], 00:19:08.472 "product_name": "passthru", 00:19:08.472 "block_size": 512, 00:19:08.472 "num_blocks": 65536, 00:19:08.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:08.472 "assigned_rate_limits": { 00:19:08.472 "rw_ios_per_sec": 0, 00:19:08.472 "rw_mbytes_per_sec": 0, 
00:19:08.472 "r_mbytes_per_sec": 0, 00:19:08.472 "w_mbytes_per_sec": 0 00:19:08.472 }, 00:19:08.472 "claimed": true, 00:19:08.472 "claim_type": "exclusive_write", 00:19:08.472 "zoned": false, 00:19:08.472 "supported_io_types": { 00:19:08.472 "read": true, 00:19:08.472 "write": true, 00:19:08.472 "unmap": true, 00:19:08.472 "flush": true, 00:19:08.472 "reset": true, 00:19:08.472 "nvme_admin": false, 00:19:08.472 "nvme_io": false, 00:19:08.472 "nvme_io_md": false, 00:19:08.472 "write_zeroes": true, 00:19:08.472 "zcopy": true, 00:19:08.472 "get_zone_info": false, 00:19:08.472 "zone_management": false, 00:19:08.472 "zone_append": false, 00:19:08.472 "compare": false, 00:19:08.472 "compare_and_write": false, 00:19:08.472 "abort": true, 00:19:08.472 "seek_hole": false, 00:19:08.472 "seek_data": false, 00:19:08.472 "copy": true, 00:19:08.472 "nvme_iov_md": false 00:19:08.472 }, 00:19:08.472 "memory_domains": [ 00:19:08.472 { 00:19:08.472 "dma_device_id": "system", 00:19:08.472 "dma_device_type": 1 00:19:08.472 }, 00:19:08.472 { 00:19:08.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.472 "dma_device_type": 2 00:19:08.472 } 00:19:08.472 ], 00:19:08.472 "driver_specific": { 00:19:08.472 "passthru": { 00:19:08.472 "name": "pt3", 00:19:08.472 "base_bdev_name": "malloc3" 00:19:08.472 } 00:19:08.472 } 00:19:08.472 }' 00:19:08.472 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.472 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.472 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:08.472 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.472 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.472 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:08.472 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 
-- # jq .md_interleave 00:19:08.472 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.730 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:08.730 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.730 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.730 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:08.730 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:08.730 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:19:08.730 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:08.988 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:08.988 "name": "pt4", 00:19:08.988 "aliases": [ 00:19:08.988 "00000000-0000-0000-0000-000000000004" 00:19:08.988 ], 00:19:08.988 "product_name": "passthru", 00:19:08.988 "block_size": 512, 00:19:08.988 "num_blocks": 65536, 00:19:08.988 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:08.988 "assigned_rate_limits": { 00:19:08.988 "rw_ios_per_sec": 0, 00:19:08.988 "rw_mbytes_per_sec": 0, 00:19:08.988 "r_mbytes_per_sec": 0, 00:19:08.988 "w_mbytes_per_sec": 0 00:19:08.988 }, 00:19:08.988 "claimed": true, 00:19:08.988 "claim_type": "exclusive_write", 00:19:08.988 "zoned": false, 00:19:08.988 "supported_io_types": { 00:19:08.988 "read": true, 00:19:08.988 "write": true, 00:19:08.988 "unmap": true, 00:19:08.988 "flush": true, 00:19:08.988 "reset": true, 00:19:08.988 "nvme_admin": false, 00:19:08.988 "nvme_io": false, 00:19:08.988 "nvme_io_md": false, 00:19:08.988 "write_zeroes": true, 00:19:08.988 "zcopy": true, 00:19:08.988 "get_zone_info": false, 00:19:08.988 
"zone_management": false, 00:19:08.988 "zone_append": false, 00:19:08.988 "compare": false, 00:19:08.988 "compare_and_write": false, 00:19:08.988 "abort": true, 00:19:08.988 "seek_hole": false, 00:19:08.989 "seek_data": false, 00:19:08.989 "copy": true, 00:19:08.989 "nvme_iov_md": false 00:19:08.989 }, 00:19:08.989 "memory_domains": [ 00:19:08.989 { 00:19:08.989 "dma_device_id": "system", 00:19:08.989 "dma_device_type": 1 00:19:08.989 }, 00:19:08.989 { 00:19:08.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.989 "dma_device_type": 2 00:19:08.989 } 00:19:08.989 ], 00:19:08.989 "driver_specific": { 00:19:08.989 "passthru": { 00:19:08.989 "name": "pt4", 00:19:08.989 "base_bdev_name": "malloc4" 00:19:08.989 } 00:19:08.989 } 00:19:08.989 }' 00:19:08.989 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.989 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.989 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:08.989 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.989 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.989 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:08.989 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.989 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:09.246 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:09.247 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:09.247 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:09.247 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:09.247 10:34:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:09.247 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:19:09.505 [2024-07-25 10:34:13.040692] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 9c0922ba-02aa-4540-a68b-129c17c84afa '!=' 9c0922ba-02aa-4540-a68b-129c17c84afa ']' 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2411344 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2411344 ']' 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2411344 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2411344 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2411344' 00:19:09.505 killing process with pid 2411344 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2411344 
00:19:09.505 [2024-07-25 10:34:13.087928] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:09.505 10:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2411344 00:19:09.505 [2024-07-25 10:34:13.088008] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.505 [2024-07-25 10:34:13.088077] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:09.505 [2024-07-25 10:34:13.088114] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24334d0 name raid_bdev1, state offline 00:19:09.505 [2024-07-25 10:34:13.133857] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:09.763 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:19:09.763 00:19:09.763 real 0m16.460s 00:19:09.763 user 0m30.217s 00:19:09.763 sys 0m2.256s 00:19:09.763 10:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:09.763 10:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.763 ************************************ 00:19:09.763 END TEST raid_superblock_test 00:19:09.763 ************************************ 00:19:09.763 10:34:13 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:19:09.763 10:34:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:09.763 10:34:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:09.763 10:34:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.763 ************************************ 00:19:09.763 START TEST raid_read_error_test 00:19:09.763 ************************************ 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local 
raid_level=concat 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 
00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.lDOhL2SyHt 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2413662 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2413662 /var/tmp/spdk-raid.sock 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2413662 ']' 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:19:09.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:09.763 10:34:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.021 [2024-07-25 10:34:13.506032] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:19:10.021 [2024-07-25 10:34:13.506132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413662 ] 00:19:10.021 [2024-07-25 10:34:13.585434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.021 [2024-07-25 10:34:13.704393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.278 [2024-07-25 10:34:13.779533] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.278 [2024-07-25 10:34:13.779581] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.844 10:34:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:10.844 10:34:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:10.844 10:34:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:10.844 10:34:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:11.102 BaseBdev1_malloc 00:19:11.102 10:34:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:11.360 true 00:19:11.360 10:34:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:11.618 [2024-07-25 10:34:15.197617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:11.618 [2024-07-25 10:34:15.197675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.618 [2024-07-25 10:34:15.197697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb67250 00:19:11.618 [2024-07-25 10:34:15.197713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.618 [2024-07-25 10:34:15.199295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.618 [2024-07-25 10:34:15.199322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:11.618 BaseBdev1 00:19:11.618 10:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:11.618 10:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:11.876 BaseBdev2_malloc 00:19:11.876 10:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:12.134 true 00:19:12.134 10:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:12.391 [2024-07-25 10:34:16.071998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:12.391 [2024-07-25 10:34:16.072067] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.391 [2024-07-25 10:34:16.072097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb56650 00:19:12.391 [2024-07-25 10:34:16.072123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.391 [2024-07-25 10:34:16.073958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.391 [2024-07-25 10:34:16.073987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:12.391 BaseBdev2 00:19:12.391 10:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:12.392 10:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:12.649 BaseBdev3_malloc 00:19:12.649 10:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:12.905 true 00:19:13.163 10:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:13.163 [2024-07-25 10:34:16.845075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:13.163 [2024-07-25 10:34:16.845143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.163 [2024-07-25 10:34:16.845172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb4c5d0 00:19:13.163 [2024-07-25 10:34:16.845188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.163 [2024-07-25 10:34:16.846932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:19:13.163 [2024-07-25 10:34:16.846960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:13.163 BaseBdev3 00:19:13.163 10:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:13.163 10:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:13.421 BaseBdev4_malloc 00:19:13.421 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:19:13.678 true 00:19:13.678 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:13.935 [2024-07-25 10:34:17.595195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:13.935 [2024-07-25 10:34:17.595269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.935 [2024-07-25 10:34:17.595294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9acd10 00:19:13.935 [2024-07-25 10:34:17.595309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.935 [2024-07-25 10:34:17.596887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.935 [2024-07-25 10:34:17.596912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:13.935 BaseBdev4 00:19:13.935 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 
00:19:14.192 [2024-07-25 10:34:17.839910] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.192 [2024-07-25 10:34:17.841296] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.192 [2024-07-25 10:34:17.841365] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:14.192 [2024-07-25 10:34:17.841466] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:14.193 [2024-07-25 10:34:17.841721] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x9ae8c0 00:19:14.193 [2024-07-25 10:34:17.841736] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:14.193 [2024-07-25 10:34:17.841955] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9af240 00:19:14.193 [2024-07-25 10:34:17.842151] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9ae8c0 00:19:14.193 [2024-07-25 10:34:17.842190] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9ae8c0 00:19:14.193 [2024-07-25 10:34:17.842321] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.193 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:14.193 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:14.193 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:14.193 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:14.193 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:14.193 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:14.193 10:34:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:14.193 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:14.193 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:14.193 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:14.193 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.193 10:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.449 10:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.449 "name": "raid_bdev1", 00:19:14.449 "uuid": "fbcbdc8b-d0b9-477e-823a-8505ceb83196", 00:19:14.449 "strip_size_kb": 64, 00:19:14.449 "state": "online", 00:19:14.449 "raid_level": "concat", 00:19:14.449 "superblock": true, 00:19:14.449 "num_base_bdevs": 4, 00:19:14.449 "num_base_bdevs_discovered": 4, 00:19:14.449 "num_base_bdevs_operational": 4, 00:19:14.449 "base_bdevs_list": [ 00:19:14.449 { 00:19:14.449 "name": "BaseBdev1", 00:19:14.449 "uuid": "986678e9-fa96-55e0-b5c8-57c764c2f421", 00:19:14.449 "is_configured": true, 00:19:14.449 "data_offset": 2048, 00:19:14.449 "data_size": 63488 00:19:14.449 }, 00:19:14.449 { 00:19:14.449 "name": "BaseBdev2", 00:19:14.449 "uuid": "8693d13c-fc92-50ba-9741-aa58a0e8a63a", 00:19:14.449 "is_configured": true, 00:19:14.449 "data_offset": 2048, 00:19:14.449 "data_size": 63488 00:19:14.449 }, 00:19:14.449 { 00:19:14.449 "name": "BaseBdev3", 00:19:14.449 "uuid": "5c5c7aad-b4fb-55ca-ad1a-e6d8754e9805", 00:19:14.449 "is_configured": true, 00:19:14.449 "data_offset": 2048, 00:19:14.449 "data_size": 63488 00:19:14.449 }, 00:19:14.449 { 00:19:14.449 "name": "BaseBdev4", 00:19:14.449 "uuid": "f29871c0-e0c1-5732-872d-2f45fcdf0293", 00:19:14.449 "is_configured": 
true, 00:19:14.449 "data_offset": 2048, 00:19:14.449 "data_size": 63488 00:19:14.449 } 00:19:14.449 ] 00:19:14.449 }' 00:19:14.449 10:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.449 10:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.013 10:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:15.013 10:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:15.270 [2024-07-25 10:34:18.762706] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9b1b50 00:19:16.202 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:16.460 
10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.460 10:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.718 10:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.718 "name": "raid_bdev1", 00:19:16.718 "uuid": "fbcbdc8b-d0b9-477e-823a-8505ceb83196", 00:19:16.718 "strip_size_kb": 64, 00:19:16.718 "state": "online", 00:19:16.718 "raid_level": "concat", 00:19:16.718 "superblock": true, 00:19:16.718 "num_base_bdevs": 4, 00:19:16.718 "num_base_bdevs_discovered": 4, 00:19:16.718 "num_base_bdevs_operational": 4, 00:19:16.718 "base_bdevs_list": [ 00:19:16.718 { 00:19:16.718 "name": "BaseBdev1", 00:19:16.718 "uuid": "986678e9-fa96-55e0-b5c8-57c764c2f421", 00:19:16.718 "is_configured": true, 00:19:16.718 "data_offset": 2048, 00:19:16.718 "data_size": 63488 00:19:16.718 }, 00:19:16.718 { 00:19:16.718 "name": "BaseBdev2", 00:19:16.718 "uuid": "8693d13c-fc92-50ba-9741-aa58a0e8a63a", 00:19:16.718 "is_configured": true, 00:19:16.718 "data_offset": 2048, 00:19:16.718 "data_size": 63488 00:19:16.718 }, 00:19:16.718 { 00:19:16.718 "name": "BaseBdev3", 00:19:16.718 "uuid": "5c5c7aad-b4fb-55ca-ad1a-e6d8754e9805", 00:19:16.718 "is_configured": true, 00:19:16.718 "data_offset": 2048, 00:19:16.718 "data_size": 63488 00:19:16.718 }, 00:19:16.718 { 00:19:16.718 "name": "BaseBdev4", 00:19:16.718 "uuid": 
"f29871c0-e0c1-5732-872d-2f45fcdf0293", 00:19:16.718 "is_configured": true, 00:19:16.718 "data_offset": 2048, 00:19:16.718 "data_size": 63488 00:19:16.718 } 00:19:16.718 ] 00:19:16.718 }' 00:19:16.718 10:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.718 10:34:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.284 10:34:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:17.542 [2024-07-25 10:34:21.055917] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:17.542 [2024-07-25 10:34:21.055960] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.542 [2024-07-25 10:34:21.058982] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.542 [2024-07-25 10:34:21.059031] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.542 [2024-07-25 10:34:21.059075] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:17.542 [2024-07-25 10:34:21.059099] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9ae8c0 name raid_bdev1, state offline 00:19:17.542 0 00:19:17.542 10:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2413662 00:19:17.542 10:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2413662 ']' 00:19:17.542 10:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2413662 00:19:17.542 10:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:19:17.542 10:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:17.542 10:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 2413662 00:19:17.542 10:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:17.542 10:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:17.542 10:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2413662' 00:19:17.542 killing process with pid 2413662 00:19:17.542 10:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2413662 00:19:17.542 [2024-07-25 10:34:21.105397] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:17.542 10:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2413662 00:19:17.542 [2024-07-25 10:34:21.148801] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:17.799 10:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.lDOhL2SyHt 00:19:17.799 10:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:17.799 10:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:17.799 10:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:19:17.799 10:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:17.799 10:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:17.799 10:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:17.799 10:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:19:17.799 00:19:17.799 real 0m8.011s 00:19:17.799 user 0m13.048s 00:19:17.799 sys 0m1.114s 00:19:17.799 10:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:17.799 10:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.799 
************************************ 00:19:17.799 END TEST raid_read_error_test 00:19:17.799 ************************************ 00:19:17.799 10:34:21 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:19:17.800 10:34:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:17.800 10:34:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:17.800 10:34:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:17.800 ************************************ 00:19:17.800 START TEST raid_write_error_test 00:19:17.800 ************************************ 00:19:17.800 10:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:19:17.800 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:17.800 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:19:17.800 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # 
echo BaseBdev3 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Tj61F82fJi 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2414691 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2414691 /var/tmp/spdk-raid.sock 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2414691 ']' 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:18.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.058 10:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.058 [2024-07-25 10:34:21.569660] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:19:18.058 [2024-07-25 10:34:21.569728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2414691 ] 00:19:18.058 [2024-07-25 10:34:21.642515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.058 [2024-07-25 10:34:21.748678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.316 [2024-07-25 10:34:21.817600] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:18.316 [2024-07-25 10:34:21.817637] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:18.911 10:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.911 10:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:18.911 10:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:18.911 10:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:19.169 BaseBdev1_malloc 00:19:19.169 10:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:19.427 true 00:19:19.427 10:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:19.685 [2024-07-25 10:34:23.350120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:19.685 [2024-07-25 10:34:23.350195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:19.685 [2024-07-25 10:34:23.350224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1891250 00:19:19.685 [2024-07-25 10:34:23.350239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.685 [2024-07-25 10:34:23.352213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.685 [2024-07-25 10:34:23.352240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:19.685 BaseBdev1 00:19:19.685 10:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:19.685 10:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:19.941 BaseBdev2_malloc 00:19:19.941 10:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:20.198 true 00:19:20.198 10:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:20.456 [2024-07-25 10:34:24.087517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:20.456 [2024-07-25 10:34:24.087572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.456 [2024-07-25 10:34:24.087594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1880650 00:19:20.456 [2024-07-25 10:34:24.087609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.456 [2024-07-25 10:34:24.089059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.456 [2024-07-25 10:34:24.089087] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:20.456 BaseBdev2 00:19:20.456 10:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:20.456 10:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:20.714 BaseBdev3_malloc 00:19:20.714 10:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:20.972 true 00:19:20.972 10:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:21.229 [2024-07-25 10:34:24.845442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:21.229 [2024-07-25 10:34:24.845515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.230 [2024-07-25 10:34:24.845539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18765d0 00:19:21.230 [2024-07-25 10:34:24.845552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.230 [2024-07-25 10:34:24.847037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.230 [2024-07-25 10:34:24.847060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:21.230 BaseBdev3 00:19:21.230 10:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:21.230 10:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:21.487 BaseBdev4_malloc 00:19:21.487 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:19:21.744 true 00:19:21.745 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:22.002 [2024-07-25 10:34:25.589672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:22.002 [2024-07-25 10:34:25.589723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.002 [2024-07-25 10:34:25.589747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16d6d10 00:19:22.002 [2024-07-25 10:34:25.589762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.002 [2024-07-25 10:34:25.591398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.002 [2024-07-25 10:34:25.591436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:22.002 BaseBdev4 00:19:22.002 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:19:22.260 [2024-07-25 10:34:25.834377] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:22.260 [2024-07-25 10:34:25.835759] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:22.260 [2024-07-25 10:34:25.835839] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:22.260 [2024-07-25 10:34:25.835925] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:22.260 [2024-07-25 10:34:25.836202] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x16d88c0 00:19:22.260 [2024-07-25 10:34:25.836220] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:22.260 [2024-07-25 10:34:25.836438] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16d9240 00:19:22.260 [2024-07-25 10:34:25.836627] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16d88c0 00:19:22.260 [2024-07-25 10:34:25.836644] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x16d88c0 00:19:22.260 [2024-07-25 10:34:25.836777] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.260 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:22.260 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:22.260 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:22.260 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:22.260 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:22.260 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:22.260 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.260 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.260 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.260 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.260 10:34:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.260 10:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.518 10:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.518 "name": "raid_bdev1", 00:19:22.518 "uuid": "bfee920c-43b1-4f2e-8c23-cf8b984975fa", 00:19:22.518 "strip_size_kb": 64, 00:19:22.518 "state": "online", 00:19:22.518 "raid_level": "concat", 00:19:22.518 "superblock": true, 00:19:22.518 "num_base_bdevs": 4, 00:19:22.518 "num_base_bdevs_discovered": 4, 00:19:22.518 "num_base_bdevs_operational": 4, 00:19:22.518 "base_bdevs_list": [ 00:19:22.518 { 00:19:22.518 "name": "BaseBdev1", 00:19:22.518 "uuid": "fc1d4252-2b89-5217-8f80-ed398a8398ed", 00:19:22.518 "is_configured": true, 00:19:22.518 "data_offset": 2048, 00:19:22.519 "data_size": 63488 00:19:22.519 }, 00:19:22.519 { 00:19:22.519 "name": "BaseBdev2", 00:19:22.519 "uuid": "ed2ccf75-9791-5166-9a0f-320d8b19712d", 00:19:22.519 "is_configured": true, 00:19:22.519 "data_offset": 2048, 00:19:22.519 "data_size": 63488 00:19:22.519 }, 00:19:22.519 { 00:19:22.519 "name": "BaseBdev3", 00:19:22.519 "uuid": "2c0d5473-f7b0-5f66-814d-419eeaf244aa", 00:19:22.519 "is_configured": true, 00:19:22.519 "data_offset": 2048, 00:19:22.519 "data_size": 63488 00:19:22.519 }, 00:19:22.519 { 00:19:22.519 "name": "BaseBdev4", 00:19:22.519 "uuid": "92037634-5f66-5413-805c-f1a461d0ac55", 00:19:22.519 "is_configured": true, 00:19:22.519 "data_offset": 2048, 00:19:22.519 "data_size": 63488 00:19:22.519 } 00:19:22.519 ] 00:19:22.519 }' 00:19:22.519 10:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.519 10:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.084 10:34:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@824 -- # sleep 1 00:19:23.084 10:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:23.342 [2024-07-25 10:34:26.793352] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16dbb50 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:24.277 10:34:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.277 10:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.843 10:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:24.843 "name": "raid_bdev1", 00:19:24.843 "uuid": "bfee920c-43b1-4f2e-8c23-cf8b984975fa", 00:19:24.843 "strip_size_kb": 64, 00:19:24.843 "state": "online", 00:19:24.843 "raid_level": "concat", 00:19:24.843 "superblock": true, 00:19:24.843 "num_base_bdevs": 4, 00:19:24.843 "num_base_bdevs_discovered": 4, 00:19:24.843 "num_base_bdevs_operational": 4, 00:19:24.843 "base_bdevs_list": [ 00:19:24.843 { 00:19:24.843 "name": "BaseBdev1", 00:19:24.843 "uuid": "fc1d4252-2b89-5217-8f80-ed398a8398ed", 00:19:24.843 "is_configured": true, 00:19:24.843 "data_offset": 2048, 00:19:24.843 "data_size": 63488 00:19:24.843 }, 00:19:24.843 { 00:19:24.843 "name": "BaseBdev2", 00:19:24.843 "uuid": "ed2ccf75-9791-5166-9a0f-320d8b19712d", 00:19:24.843 "is_configured": true, 00:19:24.843 "data_offset": 2048, 00:19:24.843 "data_size": 63488 00:19:24.843 }, 00:19:24.843 { 00:19:24.843 "name": "BaseBdev3", 00:19:24.843 "uuid": "2c0d5473-f7b0-5f66-814d-419eeaf244aa", 00:19:24.843 "is_configured": true, 00:19:24.843 "data_offset": 2048, 00:19:24.843 "data_size": 63488 00:19:24.843 }, 00:19:24.843 { 00:19:24.843 "name": "BaseBdev4", 00:19:24.843 "uuid": "92037634-5f66-5413-805c-f1a461d0ac55", 00:19:24.843 "is_configured": true, 00:19:24.843 "data_offset": 2048, 00:19:24.843 "data_size": 63488 00:19:24.843 } 00:19:24.843 ] 00:19:24.843 }' 00:19:24.843 10:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:24.843 10:34:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.102 10:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:25.360 [2024-07-25 10:34:29.037043] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:25.360 [2024-07-25 10:34:29.037122] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.360 [2024-07-25 10:34:29.040126] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.360 [2024-07-25 10:34:29.040174] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.360 [2024-07-25 10:34:29.040217] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.360 [2024-07-25 10:34:29.040231] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16d88c0 name raid_bdev1, state offline 00:19:25.360 0 00:19:25.360 10:34:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2414691 00:19:25.360 10:34:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2414691 ']' 00:19:25.360 10:34:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2414691 00:19:25.360 10:34:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:19:25.360 10:34:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.360 10:34:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2414691 00:19:25.618 10:34:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:25.618 10:34:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:25.618 10:34:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2414691' 00:19:25.618 killing process with pid 2414691 00:19:25.618 10:34:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2414691 00:19:25.618 [2024-07-25 10:34:29.087014] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.618 10:34:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2414691 00:19:25.618 [2024-07-25 10:34:29.131533] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.875 10:34:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Tj61F82fJi 00:19:25.875 10:34:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:25.875 10:34:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:25.875 10:34:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:19:25.875 10:34:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:25.875 10:34:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:25.875 10:34:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:25.875 10:34:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:19:25.875 00:19:25.875 real 0m7.926s 00:19:25.875 user 0m12.941s 00:19:25.875 sys 0m1.077s 00:19:25.875 10:34:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:25.875 10:34:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.875 ************************************ 00:19:25.875 END TEST raid_write_error_test 00:19:25.875 ************************************ 00:19:25.875 10:34:29 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:19:25.875 10:34:29 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test 
raid_state_function_test raid1 4 false 00:19:25.875 10:34:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:25.875 10:34:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:25.875 10:34:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:25.875 ************************************ 00:19:25.875 START TEST raid_state_function_test 00:19:25.875 ************************************ 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2415722 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # 
echo 'Process raid pid: 2415722' 00:19:25.876 Process raid pid: 2415722 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2415722 /var/tmp/spdk-raid.sock 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2415722 ']' 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:25.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.876 10:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.876 [2024-07-25 10:34:29.542297] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
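The xtrace above shows `bdev_svc` being launched and `waitforlisten` blocking until the app answers on /var/tmp/spdk-raid.sock (note the `max_retries=100` local). A minimal sketch of that polling pattern, assuming a simple path-existence check; the helper name is hypothetical and the real autotest_common.sh helper probes RPC liveness on the socket, not just the path:

```shell
# Hypothetical helper sketching the waitforlisten retry pattern seen above.
# Assumption: a path-existence test stands in for the real RPC liveness probe.
wait_for_path() {
    path="$1"
    max_retries="${2:-100}"   # the log's helper also defaults to 100 retries
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        if [ -e "$path" ]; then
            return 0          # target appeared; caller may start issuing RPCs
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1                  # timed out waiting for the path
}
```

Usage would look like `wait_for_path /var/tmp/spdk-raid.sock 100` before the first `rpc.py -s /var/tmp/spdk-raid.sock` call.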
00:19:25.876 [2024-07-25 10:34:29.542364] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.134 [2024-07-25 10:34:29.617202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.134 [2024-07-25 10:34:29.724131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.134 [2024-07-25 10:34:29.794241] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.134 [2024-07-25 10:34:29.794282] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.068 10:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:27.068 10:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:19:27.068 10:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:27.325 [2024-07-25 10:34:30.793488] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:27.325 [2024-07-25 10:34:30.793537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:27.325 [2024-07-25 10:34:30.793550] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:27.325 [2024-07-25 10:34:30.793571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:27.325 [2024-07-25 10:34:30.793581] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:27.325 [2024-07-25 10:34:30.793594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:27.325 
[2024-07-25 10:34:30.793603] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:27.325 [2024-07-25 10:34:30.793615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:27.325 10:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:27.325 10:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:27.325 10:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:27.325 10:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:27.325 10:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:27.325 10:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:27.325 10:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:27.325 10:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:27.325 10:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:27.325 10:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:27.325 10:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.325 10:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.583 10:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.583 "name": "Existed_Raid", 00:19:27.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.583 "strip_size_kb": 0, 00:19:27.583 "state": 
"configuring", 00:19:27.583 "raid_level": "raid1", 00:19:27.583 "superblock": false, 00:19:27.583 "num_base_bdevs": 4, 00:19:27.583 "num_base_bdevs_discovered": 0, 00:19:27.583 "num_base_bdevs_operational": 4, 00:19:27.583 "base_bdevs_list": [ 00:19:27.583 { 00:19:27.583 "name": "BaseBdev1", 00:19:27.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.583 "is_configured": false, 00:19:27.583 "data_offset": 0, 00:19:27.583 "data_size": 0 00:19:27.583 }, 00:19:27.583 { 00:19:27.583 "name": "BaseBdev2", 00:19:27.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.583 "is_configured": false, 00:19:27.583 "data_offset": 0, 00:19:27.583 "data_size": 0 00:19:27.583 }, 00:19:27.583 { 00:19:27.583 "name": "BaseBdev3", 00:19:27.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.583 "is_configured": false, 00:19:27.583 "data_offset": 0, 00:19:27.583 "data_size": 0 00:19:27.583 }, 00:19:27.583 { 00:19:27.583 "name": "BaseBdev4", 00:19:27.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.583 "is_configured": false, 00:19:27.583 "data_offset": 0, 00:19:27.583 "data_size": 0 00:19:27.583 } 00:19:27.583 ] 00:19:27.583 }' 00:19:27.583 10:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.583 10:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.147 10:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:28.147 [2024-07-25 10:34:31.844173] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:28.147 [2024-07-25 10:34:31.844214] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15d2640 name Existed_Raid, state configuring 00:19:28.404 10:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:28.662 [2024-07-25 10:34:32.136925] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.662 [2024-07-25 10:34:32.136961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.662 [2024-07-25 10:34:32.136980] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.662 [2024-07-25 10:34:32.136994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.662 [2024-07-25 10:34:32.137004] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:28.662 [2024-07-25 10:34:32.137016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:28.662 [2024-07-25 10:34:32.137025] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:28.662 [2024-07-25 10:34:32.137037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:28.662 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:28.920 [2024-07-25 10:34:32.445793] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.920 BaseBdev1 00:19:28.920 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:28.920 10:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:28.920 10:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:28.920 10:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:28.920 10:34:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:28.920 10:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:28.920 10:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:29.177 10:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:29.434 [ 00:19:29.434 { 00:19:29.434 "name": "BaseBdev1", 00:19:29.434 "aliases": [ 00:19:29.434 "43845a66-f554-4904-8f2a-c341ff774dc1" 00:19:29.434 ], 00:19:29.434 "product_name": "Malloc disk", 00:19:29.434 "block_size": 512, 00:19:29.434 "num_blocks": 65536, 00:19:29.434 "uuid": "43845a66-f554-4904-8f2a-c341ff774dc1", 00:19:29.434 "assigned_rate_limits": { 00:19:29.434 "rw_ios_per_sec": 0, 00:19:29.434 "rw_mbytes_per_sec": 0, 00:19:29.434 "r_mbytes_per_sec": 0, 00:19:29.434 "w_mbytes_per_sec": 0 00:19:29.434 }, 00:19:29.434 "claimed": true, 00:19:29.434 "claim_type": "exclusive_write", 00:19:29.434 "zoned": false, 00:19:29.434 "supported_io_types": { 00:19:29.434 "read": true, 00:19:29.434 "write": true, 00:19:29.434 "unmap": true, 00:19:29.434 "flush": true, 00:19:29.434 "reset": true, 00:19:29.434 "nvme_admin": false, 00:19:29.434 "nvme_io": false, 00:19:29.434 "nvme_io_md": false, 00:19:29.434 "write_zeroes": true, 00:19:29.434 "zcopy": true, 00:19:29.434 "get_zone_info": false, 00:19:29.434 "zone_management": false, 00:19:29.434 "zone_append": false, 00:19:29.434 "compare": false, 00:19:29.434 "compare_and_write": false, 00:19:29.434 "abort": true, 00:19:29.434 "seek_hole": false, 00:19:29.434 "seek_data": false, 00:19:29.434 "copy": true, 00:19:29.434 "nvme_iov_md": false 00:19:29.434 }, 00:19:29.434 "memory_domains": [ 00:19:29.434 { 
00:19:29.434 "dma_device_id": "system", 00:19:29.434 "dma_device_type": 1 00:19:29.434 }, 00:19:29.434 { 00:19:29.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.434 "dma_device_type": 2 00:19:29.434 } 00:19:29.434 ], 00:19:29.434 "driver_specific": {} 00:19:29.434 } 00:19:29.434 ] 00:19:29.434 10:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:29.434 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:29.434 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:29.434 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:29.434 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:29.434 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:29.434 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:29.434 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:29.434 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:29.435 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:29.435 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:29.435 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.435 10:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.692 10:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:19:29.692 "name": "Existed_Raid", 00:19:29.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.692 "strip_size_kb": 0, 00:19:29.692 "state": "configuring", 00:19:29.692 "raid_level": "raid1", 00:19:29.692 "superblock": false, 00:19:29.692 "num_base_bdevs": 4, 00:19:29.692 "num_base_bdevs_discovered": 1, 00:19:29.692 "num_base_bdevs_operational": 4, 00:19:29.692 "base_bdevs_list": [ 00:19:29.692 { 00:19:29.692 "name": "BaseBdev1", 00:19:29.692 "uuid": "43845a66-f554-4904-8f2a-c341ff774dc1", 00:19:29.692 "is_configured": true, 00:19:29.692 "data_offset": 0, 00:19:29.692 "data_size": 65536 00:19:29.692 }, 00:19:29.692 { 00:19:29.692 "name": "BaseBdev2", 00:19:29.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.692 "is_configured": false, 00:19:29.692 "data_offset": 0, 00:19:29.692 "data_size": 0 00:19:29.692 }, 00:19:29.692 { 00:19:29.692 "name": "BaseBdev3", 00:19:29.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.692 "is_configured": false, 00:19:29.692 "data_offset": 0, 00:19:29.692 "data_size": 0 00:19:29.692 }, 00:19:29.692 { 00:19:29.692 "name": "BaseBdev4", 00:19:29.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.692 "is_configured": false, 00:19:29.693 "data_offset": 0, 00:19:29.693 "data_size": 0 00:19:29.693 } 00:19:29.693 ] 00:19:29.693 }' 00:19:29.693 10:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.693 10:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.258 10:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:30.515 [2024-07-25 10:34:34.050019] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:30.515 [2024-07-25 10:34:34.050073] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15d1e50 name Existed_Raid, state configuring 
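The trace repeats one pattern throughout: request `bdev_raid_create` for all four base bdevs, create the next missing malloc bdev, then re-dump state with `bdev_raid_get_bdevs`. A dry-run sketch of that driver loop; the `rpc` function below is a stub that only echoes the `rpc.py` command line instead of contacting the SPDK app, so this is illustrative rather than the verbatim bdev_raid.sh:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the create-and-populate sequence in the log above.
# 'rpc' is a stub: it prints the rpc.py invocation instead of hitting the
# /var/tmp/spdk-raid.sock UNIX socket.
rpc() { echo "rpc.py -s /var/tmp/spdk-raid.sock $*"; }

base_bdevs=(BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4)
raid_bdev_name=Existed_Raid

# Request the raid1 bdev up front; it sits in "configuring" state until every
# named base bdev exists.
rpc bdev_raid_create -r raid1 -b "${base_bdevs[*]}" -n "$raid_bdev_name"

# Create the malloc base bdevs one at a time (32 MiB, 512-byte blocks, as in
# the log), re-checking raid state after each so discovered counts 0 -> 4.
for bdev in "${base_bdevs[@]}"; do
    rpc bdev_malloc_create 32 512 -b "$bdev"
    rpc bdev_raid_get_bdevs all
done
```

Each `bdev_raid_get_bdevs` dump then shows `num_base_bdevs_discovered` increment while `state` stays `"configuring"`.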
00:19:30.515 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:30.774 [2024-07-25 10:34:34.290686] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.774 [2024-07-25 10:34:34.292192] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:30.774 [2024-07-25 10:34:34.292226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:30.774 [2024-07-25 10:34:34.292238] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:30.774 [2024-07-25 10:34:34.292251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:30.774 [2024-07-25 10:34:34.292261] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:30.774 [2024-07-25 10:34:34.292274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.774 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.032 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:31.032 "name": "Existed_Raid", 00:19:31.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.032 "strip_size_kb": 0, 00:19:31.032 "state": "configuring", 00:19:31.032 "raid_level": "raid1", 00:19:31.032 "superblock": false, 00:19:31.032 "num_base_bdevs": 4, 00:19:31.032 "num_base_bdevs_discovered": 1, 00:19:31.032 "num_base_bdevs_operational": 4, 00:19:31.032 "base_bdevs_list": [ 00:19:31.032 { 00:19:31.032 "name": "BaseBdev1", 00:19:31.032 "uuid": "43845a66-f554-4904-8f2a-c341ff774dc1", 00:19:31.032 "is_configured": true, 00:19:31.032 "data_offset": 0, 00:19:31.032 "data_size": 65536 00:19:31.032 }, 00:19:31.032 { 00:19:31.032 "name": "BaseBdev2", 00:19:31.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.032 "is_configured": false, 00:19:31.032 "data_offset": 0, 00:19:31.032 "data_size": 0 00:19:31.032 }, 00:19:31.032 { 00:19:31.032 "name": "BaseBdev3", 00:19:31.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.032 "is_configured": false, 00:19:31.032 
"data_offset": 0, 00:19:31.032 "data_size": 0 00:19:31.032 }, 00:19:31.032 { 00:19:31.032 "name": "BaseBdev4", 00:19:31.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.032 "is_configured": false, 00:19:31.032 "data_offset": 0, 00:19:31.032 "data_size": 0 00:19:31.032 } 00:19:31.032 ] 00:19:31.032 }' 00:19:31.032 10:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:31.032 10:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.597 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:31.855 [2024-07-25 10:34:35.340053] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:31.855 BaseBdev2 00:19:31.855 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:31.855 10:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:31.855 10:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:31.855 10:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:31.855 10:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:31.855 10:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:31.855 10:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:32.111 10:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:32.368 [ 
00:19:32.368 { 00:19:32.368 "name": "BaseBdev2", 00:19:32.368 "aliases": [ 00:19:32.368 "7f882f5b-04c3-4119-983d-dd1d13ffb9b2" 00:19:32.368 ], 00:19:32.368 "product_name": "Malloc disk", 00:19:32.368 "block_size": 512, 00:19:32.368 "num_blocks": 65536, 00:19:32.368 "uuid": "7f882f5b-04c3-4119-983d-dd1d13ffb9b2", 00:19:32.368 "assigned_rate_limits": { 00:19:32.368 "rw_ios_per_sec": 0, 00:19:32.368 "rw_mbytes_per_sec": 0, 00:19:32.368 "r_mbytes_per_sec": 0, 00:19:32.368 "w_mbytes_per_sec": 0 00:19:32.368 }, 00:19:32.368 "claimed": true, 00:19:32.368 "claim_type": "exclusive_write", 00:19:32.368 "zoned": false, 00:19:32.368 "supported_io_types": { 00:19:32.368 "read": true, 00:19:32.368 "write": true, 00:19:32.368 "unmap": true, 00:19:32.368 "flush": true, 00:19:32.368 "reset": true, 00:19:32.368 "nvme_admin": false, 00:19:32.368 "nvme_io": false, 00:19:32.368 "nvme_io_md": false, 00:19:32.368 "write_zeroes": true, 00:19:32.368 "zcopy": true, 00:19:32.368 "get_zone_info": false, 00:19:32.368 "zone_management": false, 00:19:32.368 "zone_append": false, 00:19:32.368 "compare": false, 00:19:32.368 "compare_and_write": false, 00:19:32.368 "abort": true, 00:19:32.368 "seek_hole": false, 00:19:32.368 "seek_data": false, 00:19:32.368 "copy": true, 00:19:32.368 "nvme_iov_md": false 00:19:32.368 }, 00:19:32.368 "memory_domains": [ 00:19:32.368 { 00:19:32.368 "dma_device_id": "system", 00:19:32.368 "dma_device_type": 1 00:19:32.368 }, 00:19:32.368 { 00:19:32.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.368 "dma_device_type": 2 00:19:32.368 } 00:19:32.368 ], 00:19:32.368 "driver_specific": {} 00:19:32.368 } 00:19:32.368 ] 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:32.368 10:34:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.368 10:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.625 10:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.625 "name": "Existed_Raid", 00:19:32.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.625 "strip_size_kb": 0, 00:19:32.625 "state": "configuring", 00:19:32.625 "raid_level": "raid1", 00:19:32.625 "superblock": false, 00:19:32.625 "num_base_bdevs": 4, 00:19:32.625 "num_base_bdevs_discovered": 2, 00:19:32.625 "num_base_bdevs_operational": 4, 00:19:32.625 "base_bdevs_list": [ 00:19:32.625 { 00:19:32.625 
"name": "BaseBdev1", 00:19:32.625 "uuid": "43845a66-f554-4904-8f2a-c341ff774dc1", 00:19:32.625 "is_configured": true, 00:19:32.625 "data_offset": 0, 00:19:32.625 "data_size": 65536 00:19:32.625 }, 00:19:32.625 { 00:19:32.625 "name": "BaseBdev2", 00:19:32.625 "uuid": "7f882f5b-04c3-4119-983d-dd1d13ffb9b2", 00:19:32.625 "is_configured": true, 00:19:32.625 "data_offset": 0, 00:19:32.625 "data_size": 65536 00:19:32.625 }, 00:19:32.625 { 00:19:32.625 "name": "BaseBdev3", 00:19:32.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.625 "is_configured": false, 00:19:32.625 "data_offset": 0, 00:19:32.625 "data_size": 0 00:19:32.625 }, 00:19:32.625 { 00:19:32.625 "name": "BaseBdev4", 00:19:32.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.625 "is_configured": false, 00:19:32.626 "data_offset": 0, 00:19:32.626 "data_size": 0 00:19:32.626 } 00:19:32.626 ] 00:19:32.626 }' 00:19:32.626 10:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.626 10:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.189 10:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:33.448 [2024-07-25 10:34:37.054349] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:33.448 BaseBdev3 00:19:33.448 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:33.448 10:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:33.448 10:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:33.448 10:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:33.448 10:34:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:33.448 10:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:33.448 10:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:33.707 10:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:33.965 [ 00:19:33.965 { 00:19:33.965 "name": "BaseBdev3", 00:19:33.965 "aliases": [ 00:19:33.965 "69f2efee-d319-4580-a885-cbf50ec4359b" 00:19:33.965 ], 00:19:33.965 "product_name": "Malloc disk", 00:19:33.965 "block_size": 512, 00:19:33.965 "num_blocks": 65536, 00:19:33.965 "uuid": "69f2efee-d319-4580-a885-cbf50ec4359b", 00:19:33.965 "assigned_rate_limits": { 00:19:33.965 "rw_ios_per_sec": 0, 00:19:33.965 "rw_mbytes_per_sec": 0, 00:19:33.965 "r_mbytes_per_sec": 0, 00:19:33.965 "w_mbytes_per_sec": 0 00:19:33.965 }, 00:19:33.965 "claimed": true, 00:19:33.965 "claim_type": "exclusive_write", 00:19:33.965 "zoned": false, 00:19:33.965 "supported_io_types": { 00:19:33.965 "read": true, 00:19:33.965 "write": true, 00:19:33.965 "unmap": true, 00:19:33.965 "flush": true, 00:19:33.965 "reset": true, 00:19:33.965 "nvme_admin": false, 00:19:33.965 "nvme_io": false, 00:19:33.965 "nvme_io_md": false, 00:19:33.965 "write_zeroes": true, 00:19:33.965 "zcopy": true, 00:19:33.965 "get_zone_info": false, 00:19:33.965 "zone_management": false, 00:19:33.965 "zone_append": false, 00:19:33.965 "compare": false, 00:19:33.965 "compare_and_write": false, 00:19:33.965 "abort": true, 00:19:33.965 "seek_hole": false, 00:19:33.965 "seek_data": false, 00:19:33.965 "copy": true, 00:19:33.965 "nvme_iov_md": false 00:19:33.965 }, 00:19:33.965 "memory_domains": [ 00:19:33.965 { 00:19:33.965 "dma_device_id": "system", 
00:19:33.965 "dma_device_type": 1 00:19:33.965 }, 00:19:33.965 { 00:19:33.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.965 "dma_device_type": 2 00:19:33.965 } 00:19:33.965 ], 00:19:33.965 "driver_specific": {} 00:19:33.965 } 00:19:33.965 ] 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.965 10:34:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.223 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:34.223 "name": "Existed_Raid", 00:19:34.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.223 "strip_size_kb": 0, 00:19:34.223 "state": "configuring", 00:19:34.223 "raid_level": "raid1", 00:19:34.223 "superblock": false, 00:19:34.223 "num_base_bdevs": 4, 00:19:34.223 "num_base_bdevs_discovered": 3, 00:19:34.223 "num_base_bdevs_operational": 4, 00:19:34.223 "base_bdevs_list": [ 00:19:34.223 { 00:19:34.223 "name": "BaseBdev1", 00:19:34.223 "uuid": "43845a66-f554-4904-8f2a-c341ff774dc1", 00:19:34.223 "is_configured": true, 00:19:34.223 "data_offset": 0, 00:19:34.223 "data_size": 65536 00:19:34.223 }, 00:19:34.223 { 00:19:34.223 "name": "BaseBdev2", 00:19:34.223 "uuid": "7f882f5b-04c3-4119-983d-dd1d13ffb9b2", 00:19:34.223 "is_configured": true, 00:19:34.223 "data_offset": 0, 00:19:34.223 "data_size": 65536 00:19:34.223 }, 00:19:34.223 { 00:19:34.223 "name": "BaseBdev3", 00:19:34.223 "uuid": "69f2efee-d319-4580-a885-cbf50ec4359b", 00:19:34.223 "is_configured": true, 00:19:34.223 "data_offset": 0, 00:19:34.223 "data_size": 65536 00:19:34.223 }, 00:19:34.223 { 00:19:34.223 "name": "BaseBdev4", 00:19:34.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.223 "is_configured": false, 00:19:34.223 "data_offset": 0, 00:19:34.223 "data_size": 0 00:19:34.223 } 00:19:34.223 ] 00:19:34.223 }' 00:19:34.223 10:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:34.223 10:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.789 10:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:35.046 [2024-07-25 10:34:38.652727] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:35.046 [2024-07-25 10:34:38.652788] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x15d2cb0 00:19:35.046 [2024-07-25 10:34:38.652799] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:35.046 [2024-07-25 10:34:38.653009] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17793a0 00:19:35.046 [2024-07-25 10:34:38.653198] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15d2cb0 00:19:35.046 [2024-07-25 10:34:38.653215] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15d2cb0 00:19:35.046 [2024-07-25 10:34:38.653432] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.046 BaseBdev4 00:19:35.046 10:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:19:35.046 10:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:35.046 10:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:35.046 10:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:35.046 10:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:35.046 10:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:35.046 10:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:35.333 10:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:35.590 [ 00:19:35.590 { 
00:19:35.590 "name": "BaseBdev4", 00:19:35.590 "aliases": [ 00:19:35.590 "19c22847-59c7-420f-ae32-89dd8b3bcf2c" 00:19:35.590 ], 00:19:35.590 "product_name": "Malloc disk", 00:19:35.590 "block_size": 512, 00:19:35.590 "num_blocks": 65536, 00:19:35.590 "uuid": "19c22847-59c7-420f-ae32-89dd8b3bcf2c", 00:19:35.590 "assigned_rate_limits": { 00:19:35.590 "rw_ios_per_sec": 0, 00:19:35.590 "rw_mbytes_per_sec": 0, 00:19:35.590 "r_mbytes_per_sec": 0, 00:19:35.590 "w_mbytes_per_sec": 0 00:19:35.590 }, 00:19:35.590 "claimed": true, 00:19:35.590 "claim_type": "exclusive_write", 00:19:35.590 "zoned": false, 00:19:35.590 "supported_io_types": { 00:19:35.590 "read": true, 00:19:35.590 "write": true, 00:19:35.590 "unmap": true, 00:19:35.590 "flush": true, 00:19:35.590 "reset": true, 00:19:35.590 "nvme_admin": false, 00:19:35.590 "nvme_io": false, 00:19:35.590 "nvme_io_md": false, 00:19:35.590 "write_zeroes": true, 00:19:35.590 "zcopy": true, 00:19:35.590 "get_zone_info": false, 00:19:35.590 "zone_management": false, 00:19:35.590 "zone_append": false, 00:19:35.590 "compare": false, 00:19:35.590 "compare_and_write": false, 00:19:35.590 "abort": true, 00:19:35.590 "seek_hole": false, 00:19:35.590 "seek_data": false, 00:19:35.590 "copy": true, 00:19:35.590 "nvme_iov_md": false 00:19:35.590 }, 00:19:35.590 "memory_domains": [ 00:19:35.590 { 00:19:35.590 "dma_device_id": "system", 00:19:35.590 "dma_device_type": 1 00:19:35.590 }, 00:19:35.590 { 00:19:35.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.590 "dma_device_type": 2 00:19:35.590 } 00:19:35.590 ], 00:19:35.590 "driver_specific": {} 00:19:35.590 } 00:19:35.590 ] 00:19:35.590 10:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:35.590 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:35.590 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:35.590 10:34:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:35.590 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:35.591 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:35.591 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:35.591 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:35.591 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:35.591 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:35.591 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:35.591 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:35.591 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:35.591 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.591 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.848 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:35.848 "name": "Existed_Raid", 00:19:35.848 "uuid": "538a2ab2-de2f-4d38-98f6-fd35a9e39da7", 00:19:35.848 "strip_size_kb": 0, 00:19:35.848 "state": "online", 00:19:35.848 "raid_level": "raid1", 00:19:35.848 "superblock": false, 00:19:35.848 "num_base_bdevs": 4, 00:19:35.848 "num_base_bdevs_discovered": 4, 00:19:35.848 "num_base_bdevs_operational": 4, 00:19:35.848 "base_bdevs_list": [ 00:19:35.848 { 00:19:35.848 "name": 
"BaseBdev1", 00:19:35.848 "uuid": "43845a66-f554-4904-8f2a-c341ff774dc1", 00:19:35.848 "is_configured": true, 00:19:35.848 "data_offset": 0, 00:19:35.848 "data_size": 65536 00:19:35.848 }, 00:19:35.848 { 00:19:35.848 "name": "BaseBdev2", 00:19:35.848 "uuid": "7f882f5b-04c3-4119-983d-dd1d13ffb9b2", 00:19:35.848 "is_configured": true, 00:19:35.848 "data_offset": 0, 00:19:35.848 "data_size": 65536 00:19:35.848 }, 00:19:35.848 { 00:19:35.848 "name": "BaseBdev3", 00:19:35.848 "uuid": "69f2efee-d319-4580-a885-cbf50ec4359b", 00:19:35.848 "is_configured": true, 00:19:35.848 "data_offset": 0, 00:19:35.848 "data_size": 65536 00:19:35.848 }, 00:19:35.848 { 00:19:35.848 "name": "BaseBdev4", 00:19:35.848 "uuid": "19c22847-59c7-420f-ae32-89dd8b3bcf2c", 00:19:35.848 "is_configured": true, 00:19:35.848 "data_offset": 0, 00:19:35.848 "data_size": 65536 00:19:35.848 } 00:19:35.848 ] 00:19:35.848 }' 00:19:35.848 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:35.848 10:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.414 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:36.414 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:36.414 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:36.414 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:36.414 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:36.414 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:36.414 10:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:36.414 10:34:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:36.672 [2024-07-25 10:34:40.205173] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.672 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:36.672 "name": "Existed_Raid", 00:19:36.672 "aliases": [ 00:19:36.672 "538a2ab2-de2f-4d38-98f6-fd35a9e39da7" 00:19:36.672 ], 00:19:36.672 "product_name": "Raid Volume", 00:19:36.672 "block_size": 512, 00:19:36.672 "num_blocks": 65536, 00:19:36.672 "uuid": "538a2ab2-de2f-4d38-98f6-fd35a9e39da7", 00:19:36.672 "assigned_rate_limits": { 00:19:36.672 "rw_ios_per_sec": 0, 00:19:36.672 "rw_mbytes_per_sec": 0, 00:19:36.672 "r_mbytes_per_sec": 0, 00:19:36.672 "w_mbytes_per_sec": 0 00:19:36.672 }, 00:19:36.672 "claimed": false, 00:19:36.672 "zoned": false, 00:19:36.672 "supported_io_types": { 00:19:36.672 "read": true, 00:19:36.672 "write": true, 00:19:36.672 "unmap": false, 00:19:36.672 "flush": false, 00:19:36.672 "reset": true, 00:19:36.672 "nvme_admin": false, 00:19:36.672 "nvme_io": false, 00:19:36.672 "nvme_io_md": false, 00:19:36.672 "write_zeroes": true, 00:19:36.672 "zcopy": false, 00:19:36.672 "get_zone_info": false, 00:19:36.672 "zone_management": false, 00:19:36.672 "zone_append": false, 00:19:36.673 "compare": false, 00:19:36.673 "compare_and_write": false, 00:19:36.673 "abort": false, 00:19:36.673 "seek_hole": false, 00:19:36.673 "seek_data": false, 00:19:36.673 "copy": false, 00:19:36.673 "nvme_iov_md": false 00:19:36.673 }, 00:19:36.673 "memory_domains": [ 00:19:36.673 { 00:19:36.673 "dma_device_id": "system", 00:19:36.673 "dma_device_type": 1 00:19:36.673 }, 00:19:36.673 { 00:19:36.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.673 "dma_device_type": 2 00:19:36.673 }, 00:19:36.673 { 00:19:36.673 "dma_device_id": "system", 00:19:36.673 "dma_device_type": 1 00:19:36.673 }, 00:19:36.673 { 00:19:36.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:19:36.673 "dma_device_type": 2 00:19:36.673 }, 00:19:36.673 { 00:19:36.673 "dma_device_id": "system", 00:19:36.673 "dma_device_type": 1 00:19:36.673 }, 00:19:36.673 { 00:19:36.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.673 "dma_device_type": 2 00:19:36.673 }, 00:19:36.673 { 00:19:36.673 "dma_device_id": "system", 00:19:36.673 "dma_device_type": 1 00:19:36.673 }, 00:19:36.673 { 00:19:36.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.673 "dma_device_type": 2 00:19:36.673 } 00:19:36.673 ], 00:19:36.673 "driver_specific": { 00:19:36.673 "raid": { 00:19:36.673 "uuid": "538a2ab2-de2f-4d38-98f6-fd35a9e39da7", 00:19:36.673 "strip_size_kb": 0, 00:19:36.673 "state": "online", 00:19:36.673 "raid_level": "raid1", 00:19:36.673 "superblock": false, 00:19:36.673 "num_base_bdevs": 4, 00:19:36.673 "num_base_bdevs_discovered": 4, 00:19:36.673 "num_base_bdevs_operational": 4, 00:19:36.673 "base_bdevs_list": [ 00:19:36.673 { 00:19:36.673 "name": "BaseBdev1", 00:19:36.673 "uuid": "43845a66-f554-4904-8f2a-c341ff774dc1", 00:19:36.673 "is_configured": true, 00:19:36.673 "data_offset": 0, 00:19:36.673 "data_size": 65536 00:19:36.673 }, 00:19:36.673 { 00:19:36.673 "name": "BaseBdev2", 00:19:36.673 "uuid": "7f882f5b-04c3-4119-983d-dd1d13ffb9b2", 00:19:36.673 "is_configured": true, 00:19:36.673 "data_offset": 0, 00:19:36.673 "data_size": 65536 00:19:36.673 }, 00:19:36.673 { 00:19:36.673 "name": "BaseBdev3", 00:19:36.673 "uuid": "69f2efee-d319-4580-a885-cbf50ec4359b", 00:19:36.673 "is_configured": true, 00:19:36.673 "data_offset": 0, 00:19:36.673 "data_size": 65536 00:19:36.673 }, 00:19:36.673 { 00:19:36.673 "name": "BaseBdev4", 00:19:36.673 "uuid": "19c22847-59c7-420f-ae32-89dd8b3bcf2c", 00:19:36.673 "is_configured": true, 00:19:36.673 "data_offset": 0, 00:19:36.673 "data_size": 65536 00:19:36.673 } 00:19:36.673 ] 00:19:36.673 } 00:19:36.673 } 00:19:36.673 }' 00:19:36.673 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:36.673 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:36.673 BaseBdev2 00:19:36.673 BaseBdev3 00:19:36.673 BaseBdev4' 00:19:36.673 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:36.673 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:36.673 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:36.931 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:36.931 "name": "BaseBdev1", 00:19:36.931 "aliases": [ 00:19:36.931 "43845a66-f554-4904-8f2a-c341ff774dc1" 00:19:36.931 ], 00:19:36.931 "product_name": "Malloc disk", 00:19:36.931 "block_size": 512, 00:19:36.931 "num_blocks": 65536, 00:19:36.931 "uuid": "43845a66-f554-4904-8f2a-c341ff774dc1", 00:19:36.931 "assigned_rate_limits": { 00:19:36.931 "rw_ios_per_sec": 0, 00:19:36.931 "rw_mbytes_per_sec": 0, 00:19:36.931 "r_mbytes_per_sec": 0, 00:19:36.931 "w_mbytes_per_sec": 0 00:19:36.931 }, 00:19:36.931 "claimed": true, 00:19:36.931 "claim_type": "exclusive_write", 00:19:36.931 "zoned": false, 00:19:36.931 "supported_io_types": { 00:19:36.931 "read": true, 00:19:36.931 "write": true, 00:19:36.931 "unmap": true, 00:19:36.931 "flush": true, 00:19:36.931 "reset": true, 00:19:36.931 "nvme_admin": false, 00:19:36.931 "nvme_io": false, 00:19:36.931 "nvme_io_md": false, 00:19:36.931 "write_zeroes": true, 00:19:36.931 "zcopy": true, 00:19:36.931 "get_zone_info": false, 00:19:36.931 "zone_management": false, 00:19:36.931 "zone_append": false, 00:19:36.931 "compare": false, 00:19:36.931 "compare_and_write": false, 00:19:36.931 "abort": true, 00:19:36.931 "seek_hole": false, 00:19:36.931 "seek_data": 
false, 00:19:36.931 "copy": true, 00:19:36.931 "nvme_iov_md": false 00:19:36.931 }, 00:19:36.931 "memory_domains": [ 00:19:36.931 { 00:19:36.931 "dma_device_id": "system", 00:19:36.931 "dma_device_type": 1 00:19:36.931 }, 00:19:36.931 { 00:19:36.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.931 "dma_device_type": 2 00:19:36.931 } 00:19:36.931 ], 00:19:36.931 "driver_specific": {} 00:19:36.931 }' 00:19:36.931 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.931 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.931 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:36.931 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.931 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.188 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:37.188 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.188 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.188 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:37.188 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.188 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.188 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:37.188 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:37.188 10:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:37.188 10:34:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:37.446 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:37.446 "name": "BaseBdev2", 00:19:37.446 "aliases": [ 00:19:37.446 "7f882f5b-04c3-4119-983d-dd1d13ffb9b2" 00:19:37.446 ], 00:19:37.446 "product_name": "Malloc disk", 00:19:37.446 "block_size": 512, 00:19:37.446 "num_blocks": 65536, 00:19:37.446 "uuid": "7f882f5b-04c3-4119-983d-dd1d13ffb9b2", 00:19:37.446 "assigned_rate_limits": { 00:19:37.446 "rw_ios_per_sec": 0, 00:19:37.446 "rw_mbytes_per_sec": 0, 00:19:37.446 "r_mbytes_per_sec": 0, 00:19:37.446 "w_mbytes_per_sec": 0 00:19:37.446 }, 00:19:37.446 "claimed": true, 00:19:37.446 "claim_type": "exclusive_write", 00:19:37.446 "zoned": false, 00:19:37.446 "supported_io_types": { 00:19:37.446 "read": true, 00:19:37.446 "write": true, 00:19:37.446 "unmap": true, 00:19:37.446 "flush": true, 00:19:37.446 "reset": true, 00:19:37.446 "nvme_admin": false, 00:19:37.446 "nvme_io": false, 00:19:37.446 "nvme_io_md": false, 00:19:37.446 "write_zeroes": true, 00:19:37.446 "zcopy": true, 00:19:37.446 "get_zone_info": false, 00:19:37.446 "zone_management": false, 00:19:37.446 "zone_append": false, 00:19:37.446 "compare": false, 00:19:37.446 "compare_and_write": false, 00:19:37.446 "abort": true, 00:19:37.446 "seek_hole": false, 00:19:37.446 "seek_data": false, 00:19:37.446 "copy": true, 00:19:37.446 "nvme_iov_md": false 00:19:37.446 }, 00:19:37.446 "memory_domains": [ 00:19:37.446 { 00:19:37.446 "dma_device_id": "system", 00:19:37.446 "dma_device_type": 1 00:19:37.446 }, 00:19:37.446 { 00:19:37.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.446 "dma_device_type": 2 00:19:37.446 } 00:19:37.446 ], 00:19:37.446 "driver_specific": {} 00:19:37.446 }' 00:19:37.446 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.446 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:19:37.446 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:37.446 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.705 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.705 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:37.705 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.705 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.705 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:37.705 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.705 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.705 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:37.705 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:37.705 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:37.705 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:37.963 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:37.963 "name": "BaseBdev3", 00:19:37.963 "aliases": [ 00:19:37.963 "69f2efee-d319-4580-a885-cbf50ec4359b" 00:19:37.963 ], 00:19:37.963 "product_name": "Malloc disk", 00:19:37.963 "block_size": 512, 00:19:37.963 "num_blocks": 65536, 00:19:37.963 "uuid": "69f2efee-d319-4580-a885-cbf50ec4359b", 00:19:37.963 "assigned_rate_limits": { 00:19:37.963 "rw_ios_per_sec": 0, 00:19:37.963 "rw_mbytes_per_sec": 0, 00:19:37.963 "r_mbytes_per_sec": 0, 
00:19:37.963 "w_mbytes_per_sec": 0 00:19:37.963 }, 00:19:37.963 "claimed": true, 00:19:37.963 "claim_type": "exclusive_write", 00:19:37.963 "zoned": false, 00:19:37.963 "supported_io_types": { 00:19:37.963 "read": true, 00:19:37.963 "write": true, 00:19:37.963 "unmap": true, 00:19:37.963 "flush": true, 00:19:37.963 "reset": true, 00:19:37.963 "nvme_admin": false, 00:19:37.963 "nvme_io": false, 00:19:37.963 "nvme_io_md": false, 00:19:37.963 "write_zeroes": true, 00:19:37.963 "zcopy": true, 00:19:37.963 "get_zone_info": false, 00:19:37.963 "zone_management": false, 00:19:37.963 "zone_append": false, 00:19:37.963 "compare": false, 00:19:37.963 "compare_and_write": false, 00:19:37.963 "abort": true, 00:19:37.963 "seek_hole": false, 00:19:37.963 "seek_data": false, 00:19:37.963 "copy": true, 00:19:37.963 "nvme_iov_md": false 00:19:37.963 }, 00:19:37.963 "memory_domains": [ 00:19:37.963 { 00:19:37.963 "dma_device_id": "system", 00:19:37.963 "dma_device_type": 1 00:19:37.963 }, 00:19:37.963 { 00:19:37.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.963 "dma_device_type": 2 00:19:37.963 } 00:19:37.963 ], 00:19:37.963 "driver_specific": {} 00:19:37.963 }' 00:19:37.963 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.963 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:38.221 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:38.221 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.221 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.221 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:38.221 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:38.221 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:19:38.221 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:38.221 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:38.221 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:38.221 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:38.221 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:38.221 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:38.221 10:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:38.479 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:38.479 "name": "BaseBdev4", 00:19:38.479 "aliases": [ 00:19:38.479 "19c22847-59c7-420f-ae32-89dd8b3bcf2c" 00:19:38.479 ], 00:19:38.479 "product_name": "Malloc disk", 00:19:38.479 "block_size": 512, 00:19:38.479 "num_blocks": 65536, 00:19:38.479 "uuid": "19c22847-59c7-420f-ae32-89dd8b3bcf2c", 00:19:38.479 "assigned_rate_limits": { 00:19:38.479 "rw_ios_per_sec": 0, 00:19:38.479 "rw_mbytes_per_sec": 0, 00:19:38.479 "r_mbytes_per_sec": 0, 00:19:38.479 "w_mbytes_per_sec": 0 00:19:38.479 }, 00:19:38.479 "claimed": true, 00:19:38.479 "claim_type": "exclusive_write", 00:19:38.479 "zoned": false, 00:19:38.479 "supported_io_types": { 00:19:38.479 "read": true, 00:19:38.479 "write": true, 00:19:38.479 "unmap": true, 00:19:38.479 "flush": true, 00:19:38.479 "reset": true, 00:19:38.479 "nvme_admin": false, 00:19:38.479 "nvme_io": false, 00:19:38.480 "nvme_io_md": false, 00:19:38.480 "write_zeroes": true, 00:19:38.480 "zcopy": true, 00:19:38.480 "get_zone_info": false, 00:19:38.480 "zone_management": false, 00:19:38.480 "zone_append": false, 00:19:38.480 
"compare": false, 00:19:38.480 "compare_and_write": false, 00:19:38.480 "abort": true, 00:19:38.480 "seek_hole": false, 00:19:38.480 "seek_data": false, 00:19:38.480 "copy": true, 00:19:38.480 "nvme_iov_md": false 00:19:38.480 }, 00:19:38.480 "memory_domains": [ 00:19:38.480 { 00:19:38.480 "dma_device_id": "system", 00:19:38.480 "dma_device_type": 1 00:19:38.480 }, 00:19:38.480 { 00:19:38.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.480 "dma_device_type": 2 00:19:38.480 } 00:19:38.480 ], 00:19:38.480 "driver_specific": {} 00:19:38.480 }' 00:19:38.480 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:38.737 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:38.737 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:38.737 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.737 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.737 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:38.737 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:38.737 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:38.737 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:38.737 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:38.737 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:38.996 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:38.996 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:19:38.996 [2024-07-25 10:34:42.691528] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.254 10:34:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.254 "name": "Existed_Raid", 00:19:39.254 "uuid": "538a2ab2-de2f-4d38-98f6-fd35a9e39da7", 00:19:39.254 "strip_size_kb": 0, 00:19:39.254 "state": "online", 00:19:39.254 "raid_level": "raid1", 00:19:39.254 "superblock": false, 00:19:39.254 "num_base_bdevs": 4, 00:19:39.254 "num_base_bdevs_discovered": 3, 00:19:39.254 "num_base_bdevs_operational": 3, 00:19:39.254 "base_bdevs_list": [ 00:19:39.254 { 00:19:39.254 "name": null, 00:19:39.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.254 "is_configured": false, 00:19:39.254 "data_offset": 0, 00:19:39.254 "data_size": 65536 00:19:39.254 }, 00:19:39.254 { 00:19:39.254 "name": "BaseBdev2", 00:19:39.254 "uuid": "7f882f5b-04c3-4119-983d-dd1d13ffb9b2", 00:19:39.254 "is_configured": true, 00:19:39.254 "data_offset": 0, 00:19:39.254 "data_size": 65536 00:19:39.254 }, 00:19:39.254 { 00:19:39.254 "name": "BaseBdev3", 00:19:39.254 "uuid": "69f2efee-d319-4580-a885-cbf50ec4359b", 00:19:39.254 "is_configured": true, 00:19:39.254 "data_offset": 0, 00:19:39.254 "data_size": 65536 00:19:39.254 }, 00:19:39.254 { 00:19:39.254 "name": "BaseBdev4", 00:19:39.254 "uuid": "19c22847-59c7-420f-ae32-89dd8b3bcf2c", 00:19:39.254 "is_configured": true, 00:19:39.254 "data_offset": 0, 00:19:39.254 "data_size": 65536 00:19:39.254 } 00:19:39.254 ] 00:19:39.254 }' 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.254 10:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.830 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:39.830 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:39.830 10:34:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.830 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:40.091 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:40.091 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:40.091 10:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:40.348 [2024-07-25 10:34:43.988861] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:40.348 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:40.348 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:40.348 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.348 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:40.606 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:40.606 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:40.606 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:40.865 [2024-07-25 10:34:44.504726] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:40.865 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 
00:19:40.865 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:40.865 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.865 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:41.123 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:41.123 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:41.123 10:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:41.381 [2024-07-25 10:34:45.018494] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:41.381 [2024-07-25 10:34:45.018598] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.381 [2024-07-25 10:34:45.031493] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.381 [2024-07-25 10:34:45.031567] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.381 [2024-07-25 10:34:45.031579] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15d2cb0 name Existed_Raid, state offline 00:19:41.381 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:41.381 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:41.381 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.381 10:34:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:41.639 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:41.639 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:41.639 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:41.639 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:41.639 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:41.639 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:41.897 BaseBdev2 00:19:41.897 10:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:41.897 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:41.897 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:41.897 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:41.897 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:41.897 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:41.897 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:42.155 10:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:42.414 [ 00:19:42.414 { 00:19:42.414 "name": "BaseBdev2", 00:19:42.414 "aliases": [ 
00:19:42.414 "8710b901-b548-475f-96a4-c7f718a67745" 00:19:42.414 ], 00:19:42.414 "product_name": "Malloc disk", 00:19:42.414 "block_size": 512, 00:19:42.414 "num_blocks": 65536, 00:19:42.414 "uuid": "8710b901-b548-475f-96a4-c7f718a67745", 00:19:42.414 "assigned_rate_limits": { 00:19:42.414 "rw_ios_per_sec": 0, 00:19:42.414 "rw_mbytes_per_sec": 0, 00:19:42.414 "r_mbytes_per_sec": 0, 00:19:42.414 "w_mbytes_per_sec": 0 00:19:42.414 }, 00:19:42.414 "claimed": false, 00:19:42.414 "zoned": false, 00:19:42.414 "supported_io_types": { 00:19:42.414 "read": true, 00:19:42.414 "write": true, 00:19:42.414 "unmap": true, 00:19:42.414 "flush": true, 00:19:42.414 "reset": true, 00:19:42.414 "nvme_admin": false, 00:19:42.414 "nvme_io": false, 00:19:42.414 "nvme_io_md": false, 00:19:42.414 "write_zeroes": true, 00:19:42.414 "zcopy": true, 00:19:42.414 "get_zone_info": false, 00:19:42.414 "zone_management": false, 00:19:42.414 "zone_append": false, 00:19:42.414 "compare": false, 00:19:42.414 "compare_and_write": false, 00:19:42.414 "abort": true, 00:19:42.414 "seek_hole": false, 00:19:42.414 "seek_data": false, 00:19:42.414 "copy": true, 00:19:42.414 "nvme_iov_md": false 00:19:42.414 }, 00:19:42.414 "memory_domains": [ 00:19:42.414 { 00:19:42.414 "dma_device_id": "system", 00:19:42.414 "dma_device_type": 1 00:19:42.414 }, 00:19:42.414 { 00:19:42.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.414 "dma_device_type": 2 00:19:42.414 } 00:19:42.414 ], 00:19:42.414 "driver_specific": {} 00:19:42.414 } 00:19:42.414 ] 00:19:42.414 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:42.414 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:42.414 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:42.414 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:42.672 BaseBdev3 00:19:42.672 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:42.672 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:42.672 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:42.672 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:42.672 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:42.672 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:42.672 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:42.931 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:43.190 [ 00:19:43.190 { 00:19:43.190 "name": "BaseBdev3", 00:19:43.190 "aliases": [ 00:19:43.190 "26f50d8e-dc21-4602-a3cd-226573f5273f" 00:19:43.190 ], 00:19:43.190 "product_name": "Malloc disk", 00:19:43.190 "block_size": 512, 00:19:43.190 "num_blocks": 65536, 00:19:43.190 "uuid": "26f50d8e-dc21-4602-a3cd-226573f5273f", 00:19:43.190 "assigned_rate_limits": { 00:19:43.190 "rw_ios_per_sec": 0, 00:19:43.190 "rw_mbytes_per_sec": 0, 00:19:43.190 "r_mbytes_per_sec": 0, 00:19:43.190 "w_mbytes_per_sec": 0 00:19:43.190 }, 00:19:43.190 "claimed": false, 00:19:43.190 "zoned": false, 00:19:43.190 "supported_io_types": { 00:19:43.190 "read": true, 00:19:43.190 "write": true, 00:19:43.190 "unmap": true, 00:19:43.190 "flush": true, 00:19:43.190 "reset": true, 00:19:43.190 "nvme_admin": false, 00:19:43.190 
"nvme_io": false, 00:19:43.190 "nvme_io_md": false, 00:19:43.190 "write_zeroes": true, 00:19:43.190 "zcopy": true, 00:19:43.190 "get_zone_info": false, 00:19:43.190 "zone_management": false, 00:19:43.190 "zone_append": false, 00:19:43.190 "compare": false, 00:19:43.190 "compare_and_write": false, 00:19:43.190 "abort": true, 00:19:43.190 "seek_hole": false, 00:19:43.190 "seek_data": false, 00:19:43.190 "copy": true, 00:19:43.190 "nvme_iov_md": false 00:19:43.190 }, 00:19:43.190 "memory_domains": [ 00:19:43.190 { 00:19:43.190 "dma_device_id": "system", 00:19:43.190 "dma_device_type": 1 00:19:43.190 }, 00:19:43.190 { 00:19:43.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.190 "dma_device_type": 2 00:19:43.190 } 00:19:43.190 ], 00:19:43.190 "driver_specific": {} 00:19:43.190 } 00:19:43.190 ] 00:19:43.190 10:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:43.190 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:43.190 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:43.190 10:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:43.449 BaseBdev4 00:19:43.449 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:43.449 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:43.449 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:43.449 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:43.449 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:43.449 10:34:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:43.449 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:43.706 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:43.965 [ 00:19:43.965 { 00:19:43.965 "name": "BaseBdev4", 00:19:43.965 "aliases": [ 00:19:43.965 "69cab96e-f708-4a7b-b7bf-f813c4576fe0" 00:19:43.965 ], 00:19:43.965 "product_name": "Malloc disk", 00:19:43.965 "block_size": 512, 00:19:43.965 "num_blocks": 65536, 00:19:43.965 "uuid": "69cab96e-f708-4a7b-b7bf-f813c4576fe0", 00:19:43.965 "assigned_rate_limits": { 00:19:43.965 "rw_ios_per_sec": 0, 00:19:43.965 "rw_mbytes_per_sec": 0, 00:19:43.965 "r_mbytes_per_sec": 0, 00:19:43.965 "w_mbytes_per_sec": 0 00:19:43.965 }, 00:19:43.965 "claimed": false, 00:19:43.965 "zoned": false, 00:19:43.965 "supported_io_types": { 00:19:43.965 "read": true, 00:19:43.965 "write": true, 00:19:43.965 "unmap": true, 00:19:43.965 "flush": true, 00:19:43.965 "reset": true, 00:19:43.965 "nvme_admin": false, 00:19:43.965 "nvme_io": false, 00:19:43.965 "nvme_io_md": false, 00:19:43.965 "write_zeroes": true, 00:19:43.965 "zcopy": true, 00:19:43.965 "get_zone_info": false, 00:19:43.965 "zone_management": false, 00:19:43.965 "zone_append": false, 00:19:43.965 "compare": false, 00:19:43.965 "compare_and_write": false, 00:19:43.965 "abort": true, 00:19:43.965 "seek_hole": false, 00:19:43.965 "seek_data": false, 00:19:43.965 "copy": true, 00:19:43.965 "nvme_iov_md": false 00:19:43.965 }, 00:19:43.965 "memory_domains": [ 00:19:43.965 { 00:19:43.965 "dma_device_id": "system", 00:19:43.965 "dma_device_type": 1 00:19:43.965 }, 00:19:43.965 { 00:19:43.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.965 "dma_device_type": 
2 00:19:43.965 } 00:19:43.965 ], 00:19:43.965 "driver_specific": {} 00:19:43.965 } 00:19:43.965 ] 00:19:43.965 10:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:43.965 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:43.965 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:43.965 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:44.223 [2024-07-25 10:34:47.793350] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:44.223 [2024-07-25 10:34:47.793417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:44.223 [2024-07-25 10:34:47.793450] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:44.223 [2024-07-25 10:34:47.794775] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:44.223 [2024-07-25 10:34:47.794817] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:44.223 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:44.223 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:44.223 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:44.223 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:44.223 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:44.223 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- 
# local num_base_bdevs_operational=4 00:19:44.223 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:44.223 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:44.223 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:44.223 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:44.223 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.223 10:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.482 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:44.482 "name": "Existed_Raid", 00:19:44.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.482 "strip_size_kb": 0, 00:19:44.482 "state": "configuring", 00:19:44.482 "raid_level": "raid1", 00:19:44.482 "superblock": false, 00:19:44.482 "num_base_bdevs": 4, 00:19:44.482 "num_base_bdevs_discovered": 3, 00:19:44.482 "num_base_bdevs_operational": 4, 00:19:44.482 "base_bdevs_list": [ 00:19:44.482 { 00:19:44.482 "name": "BaseBdev1", 00:19:44.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.482 "is_configured": false, 00:19:44.482 "data_offset": 0, 00:19:44.482 "data_size": 0 00:19:44.482 }, 00:19:44.482 { 00:19:44.482 "name": "BaseBdev2", 00:19:44.482 "uuid": "8710b901-b548-475f-96a4-c7f718a67745", 00:19:44.482 "is_configured": true, 00:19:44.482 "data_offset": 0, 00:19:44.482 "data_size": 65536 00:19:44.482 }, 00:19:44.482 { 00:19:44.482 "name": "BaseBdev3", 00:19:44.482 "uuid": "26f50d8e-dc21-4602-a3cd-226573f5273f", 00:19:44.482 "is_configured": true, 00:19:44.482 "data_offset": 0, 00:19:44.482 "data_size": 65536 00:19:44.482 }, 00:19:44.482 { 
00:19:44.482 "name": "BaseBdev4", 00:19:44.482 "uuid": "69cab96e-f708-4a7b-b7bf-f813c4576fe0", 00:19:44.482 "is_configured": true, 00:19:44.482 "data_offset": 0, 00:19:44.482 "data_size": 65536 00:19:44.482 } 00:19:44.482 ] 00:19:44.482 }' 00:19:44.482 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:44.482 10:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.047 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:45.305 [2024-07-25 10:34:48.824035] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:45.305 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:45.305 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:45.305 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:45.305 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:45.305 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:45.305 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:45.305 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.305 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.305 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.305 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.305 10:34:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.305 10:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.564 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:45.564 "name": "Existed_Raid", 00:19:45.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.564 "strip_size_kb": 0, 00:19:45.564 "state": "configuring", 00:19:45.564 "raid_level": "raid1", 00:19:45.564 "superblock": false, 00:19:45.564 "num_base_bdevs": 4, 00:19:45.564 "num_base_bdevs_discovered": 2, 00:19:45.564 "num_base_bdevs_operational": 4, 00:19:45.564 "base_bdevs_list": [ 00:19:45.564 { 00:19:45.564 "name": "BaseBdev1", 00:19:45.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.564 "is_configured": false, 00:19:45.564 "data_offset": 0, 00:19:45.564 "data_size": 0 00:19:45.564 }, 00:19:45.564 { 00:19:45.564 "name": null, 00:19:45.564 "uuid": "8710b901-b548-475f-96a4-c7f718a67745", 00:19:45.564 "is_configured": false, 00:19:45.564 "data_offset": 0, 00:19:45.564 "data_size": 65536 00:19:45.564 }, 00:19:45.564 { 00:19:45.564 "name": "BaseBdev3", 00:19:45.564 "uuid": "26f50d8e-dc21-4602-a3cd-226573f5273f", 00:19:45.564 "is_configured": true, 00:19:45.564 "data_offset": 0, 00:19:45.564 "data_size": 65536 00:19:45.564 }, 00:19:45.564 { 00:19:45.564 "name": "BaseBdev4", 00:19:45.564 "uuid": "69cab96e-f708-4a7b-b7bf-f813c4576fe0", 00:19:45.564 "is_configured": true, 00:19:45.564 "data_offset": 0, 00:19:45.564 "data_size": 65536 00:19:45.564 } 00:19:45.564 ] 00:19:45.564 }' 00:19:45.564 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:45.564 10:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.129 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.129 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:46.129 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:46.129 10:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:46.388 [2024-07-25 10:34:50.095181] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.388 BaseBdev1 00:19:46.645 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:46.645 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:46.645 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:46.645 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:46.645 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:46.645 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:46.645 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:46.645 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:46.903 [ 00:19:46.903 { 00:19:46.903 "name": "BaseBdev1", 00:19:46.903 "aliases": [ 00:19:46.903 "a35fc248-6e1b-4f5f-8472-a5b6ed195cb9" 00:19:46.903 ], 00:19:46.903 
"product_name": "Malloc disk", 00:19:46.903 "block_size": 512, 00:19:46.903 "num_blocks": 65536, 00:19:46.903 "uuid": "a35fc248-6e1b-4f5f-8472-a5b6ed195cb9", 00:19:46.903 "assigned_rate_limits": { 00:19:46.903 "rw_ios_per_sec": 0, 00:19:46.903 "rw_mbytes_per_sec": 0, 00:19:46.903 "r_mbytes_per_sec": 0, 00:19:46.903 "w_mbytes_per_sec": 0 00:19:46.903 }, 00:19:46.903 "claimed": true, 00:19:46.903 "claim_type": "exclusive_write", 00:19:46.903 "zoned": false, 00:19:46.903 "supported_io_types": { 00:19:46.903 "read": true, 00:19:46.903 "write": true, 00:19:46.903 "unmap": true, 00:19:46.903 "flush": true, 00:19:46.903 "reset": true, 00:19:46.903 "nvme_admin": false, 00:19:46.903 "nvme_io": false, 00:19:46.903 "nvme_io_md": false, 00:19:46.903 "write_zeroes": true, 00:19:46.903 "zcopy": true, 00:19:46.903 "get_zone_info": false, 00:19:46.903 "zone_management": false, 00:19:46.903 "zone_append": false, 00:19:46.903 "compare": false, 00:19:46.903 "compare_and_write": false, 00:19:46.903 "abort": true, 00:19:46.903 "seek_hole": false, 00:19:46.903 "seek_data": false, 00:19:46.903 "copy": true, 00:19:46.903 "nvme_iov_md": false 00:19:46.903 }, 00:19:46.903 "memory_domains": [ 00:19:46.903 { 00:19:46.903 "dma_device_id": "system", 00:19:46.903 "dma_device_type": 1 00:19:46.903 }, 00:19:46.903 { 00:19:46.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.903 "dma_device_type": 2 00:19:46.903 } 00:19:46.903 ], 00:19:46.903 "driver_specific": {} 00:19:46.903 } 00:19:46.903 ] 00:19:46.903 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:46.903 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:46.903 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:46.903 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:46.903 
10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:46.903 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:46.903 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:46.903 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.903 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.903 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.903 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.903 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.903 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.161 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.161 "name": "Existed_Raid", 00:19:47.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.161 "strip_size_kb": 0, 00:19:47.161 "state": "configuring", 00:19:47.161 "raid_level": "raid1", 00:19:47.161 "superblock": false, 00:19:47.161 "num_base_bdevs": 4, 00:19:47.161 "num_base_bdevs_discovered": 3, 00:19:47.161 "num_base_bdevs_operational": 4, 00:19:47.161 "base_bdevs_list": [ 00:19:47.161 { 00:19:47.161 "name": "BaseBdev1", 00:19:47.161 "uuid": "a35fc248-6e1b-4f5f-8472-a5b6ed195cb9", 00:19:47.161 "is_configured": true, 00:19:47.161 "data_offset": 0, 00:19:47.161 "data_size": 65536 00:19:47.161 }, 00:19:47.161 { 00:19:47.161 "name": null, 00:19:47.161 "uuid": "8710b901-b548-475f-96a4-c7f718a67745", 00:19:47.161 "is_configured": false, 00:19:47.161 "data_offset": 0, 
00:19:47.161 "data_size": 65536 00:19:47.161 }, 00:19:47.161 { 00:19:47.161 "name": "BaseBdev3", 00:19:47.161 "uuid": "26f50d8e-dc21-4602-a3cd-226573f5273f", 00:19:47.161 "is_configured": true, 00:19:47.161 "data_offset": 0, 00:19:47.161 "data_size": 65536 00:19:47.161 }, 00:19:47.161 { 00:19:47.161 "name": "BaseBdev4", 00:19:47.161 "uuid": "69cab96e-f708-4a7b-b7bf-f813c4576fe0", 00:19:47.161 "is_configured": true, 00:19:47.161 "data_offset": 0, 00:19:47.161 "data_size": 65536 00:19:47.161 } 00:19:47.162 ] 00:19:47.162 }' 00:19:47.162 10:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.162 10:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.728 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.728 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:47.986 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:47.986 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:48.244 [2024-07-25 10:34:51.875844] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:48.244 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:48.244 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:48.244 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:48.244 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:19:48.244 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:48.244 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:48.244 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.244 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.244 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.244 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.244 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.244 10:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.502 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.502 "name": "Existed_Raid", 00:19:48.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.502 "strip_size_kb": 0, 00:19:48.502 "state": "configuring", 00:19:48.502 "raid_level": "raid1", 00:19:48.502 "superblock": false, 00:19:48.502 "num_base_bdevs": 4, 00:19:48.502 "num_base_bdevs_discovered": 2, 00:19:48.502 "num_base_bdevs_operational": 4, 00:19:48.502 "base_bdevs_list": [ 00:19:48.502 { 00:19:48.502 "name": "BaseBdev1", 00:19:48.502 "uuid": "a35fc248-6e1b-4f5f-8472-a5b6ed195cb9", 00:19:48.502 "is_configured": true, 00:19:48.502 "data_offset": 0, 00:19:48.502 "data_size": 65536 00:19:48.502 }, 00:19:48.502 { 00:19:48.502 "name": null, 00:19:48.502 "uuid": "8710b901-b548-475f-96a4-c7f718a67745", 00:19:48.502 "is_configured": false, 00:19:48.502 "data_offset": 0, 00:19:48.502 "data_size": 65536 00:19:48.502 }, 00:19:48.502 { 00:19:48.502 "name": null, 00:19:48.502 
"uuid": "26f50d8e-dc21-4602-a3cd-226573f5273f", 00:19:48.502 "is_configured": false, 00:19:48.502 "data_offset": 0, 00:19:48.502 "data_size": 65536 00:19:48.502 }, 00:19:48.502 { 00:19:48.502 "name": "BaseBdev4", 00:19:48.502 "uuid": "69cab96e-f708-4a7b-b7bf-f813c4576fe0", 00:19:48.502 "is_configured": true, 00:19:48.502 "data_offset": 0, 00:19:48.502 "data_size": 65536 00:19:48.502 } 00:19:48.502 ] 00:19:48.502 }' 00:19:48.502 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.502 10:34:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.069 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.069 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:49.362 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:49.362 10:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:49.653 [2024-07-25 10:34:53.175310] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:49.653 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:49.653 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:49.653 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:49.653 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:49.653 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:19:49.653 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:49.653 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:49.653 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:49.653 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:49.653 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:49.653 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.653 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.912 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.912 "name": "Existed_Raid", 00:19:49.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.912 "strip_size_kb": 0, 00:19:49.912 "state": "configuring", 00:19:49.912 "raid_level": "raid1", 00:19:49.912 "superblock": false, 00:19:49.912 "num_base_bdevs": 4, 00:19:49.912 "num_base_bdevs_discovered": 3, 00:19:49.912 "num_base_bdevs_operational": 4, 00:19:49.912 "base_bdevs_list": [ 00:19:49.912 { 00:19:49.912 "name": "BaseBdev1", 00:19:49.912 "uuid": "a35fc248-6e1b-4f5f-8472-a5b6ed195cb9", 00:19:49.912 "is_configured": true, 00:19:49.912 "data_offset": 0, 00:19:49.912 "data_size": 65536 00:19:49.912 }, 00:19:49.912 { 00:19:49.912 "name": null, 00:19:49.912 "uuid": "8710b901-b548-475f-96a4-c7f718a67745", 00:19:49.912 "is_configured": false, 00:19:49.912 "data_offset": 0, 00:19:49.912 "data_size": 65536 00:19:49.912 }, 00:19:49.912 { 00:19:49.912 "name": "BaseBdev3", 00:19:49.912 "uuid": "26f50d8e-dc21-4602-a3cd-226573f5273f", 00:19:49.912 "is_configured": true, 
00:19:49.912 "data_offset": 0, 00:19:49.912 "data_size": 65536 00:19:49.912 }, 00:19:49.912 { 00:19:49.912 "name": "BaseBdev4", 00:19:49.912 "uuid": "69cab96e-f708-4a7b-b7bf-f813c4576fe0", 00:19:49.912 "is_configured": true, 00:19:49.912 "data_offset": 0, 00:19:49.912 "data_size": 65536 00:19:49.912 } 00:19:49.912 ] 00:19:49.912 }' 00:19:49.912 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.912 10:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.478 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.478 10:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:50.736 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:50.736 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:50.736 [2024-07-25 10:34:54.442694] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:50.994 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:50.994 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:50.994 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:50.994 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:50.994 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:50.994 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:19:50.994 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:50.994 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:50.994 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:50.994 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:50.994 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.994 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.254 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.254 "name": "Existed_Raid", 00:19:51.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.254 "strip_size_kb": 0, 00:19:51.254 "state": "configuring", 00:19:51.254 "raid_level": "raid1", 00:19:51.254 "superblock": false, 00:19:51.254 "num_base_bdevs": 4, 00:19:51.254 "num_base_bdevs_discovered": 2, 00:19:51.254 "num_base_bdevs_operational": 4, 00:19:51.254 "base_bdevs_list": [ 00:19:51.254 { 00:19:51.254 "name": null, 00:19:51.254 "uuid": "a35fc248-6e1b-4f5f-8472-a5b6ed195cb9", 00:19:51.254 "is_configured": false, 00:19:51.254 "data_offset": 0, 00:19:51.254 "data_size": 65536 00:19:51.254 }, 00:19:51.254 { 00:19:51.254 "name": null, 00:19:51.254 "uuid": "8710b901-b548-475f-96a4-c7f718a67745", 00:19:51.254 "is_configured": false, 00:19:51.254 "data_offset": 0, 00:19:51.254 "data_size": 65536 00:19:51.254 }, 00:19:51.254 { 00:19:51.254 "name": "BaseBdev3", 00:19:51.254 "uuid": "26f50d8e-dc21-4602-a3cd-226573f5273f", 00:19:51.254 "is_configured": true, 00:19:51.254 "data_offset": 0, 00:19:51.254 "data_size": 65536 00:19:51.254 }, 00:19:51.254 { 00:19:51.254 "name": 
"BaseBdev4", 00:19:51.254 "uuid": "69cab96e-f708-4a7b-b7bf-f813c4576fe0", 00:19:51.254 "is_configured": true, 00:19:51.254 "data_offset": 0, 00:19:51.254 "data_size": 65536 00:19:51.254 } 00:19:51.254 ] 00:19:51.254 }' 00:19:51.254 10:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.254 10:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.821 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.821 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:51.821 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:51.821 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:52.080 [2024-07-25 10:34:55.727820] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.080 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:52.080 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:52.080 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:52.080 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:52.080 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:52.080 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:52.080 10:34:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:52.080 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:52.080 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:52.080 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:52.080 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.080 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.338 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:52.338 "name": "Existed_Raid", 00:19:52.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.338 "strip_size_kb": 0, 00:19:52.338 "state": "configuring", 00:19:52.338 "raid_level": "raid1", 00:19:52.338 "superblock": false, 00:19:52.338 "num_base_bdevs": 4, 00:19:52.338 "num_base_bdevs_discovered": 3, 00:19:52.338 "num_base_bdevs_operational": 4, 00:19:52.338 "base_bdevs_list": [ 00:19:52.338 { 00:19:52.338 "name": null, 00:19:52.338 "uuid": "a35fc248-6e1b-4f5f-8472-a5b6ed195cb9", 00:19:52.338 "is_configured": false, 00:19:52.338 "data_offset": 0, 00:19:52.338 "data_size": 65536 00:19:52.338 }, 00:19:52.338 { 00:19:52.338 "name": "BaseBdev2", 00:19:52.338 "uuid": "8710b901-b548-475f-96a4-c7f718a67745", 00:19:52.338 "is_configured": true, 00:19:52.338 "data_offset": 0, 00:19:52.338 "data_size": 65536 00:19:52.338 }, 00:19:52.338 { 00:19:52.338 "name": "BaseBdev3", 00:19:52.338 "uuid": "26f50d8e-dc21-4602-a3cd-226573f5273f", 00:19:52.338 "is_configured": true, 00:19:52.338 "data_offset": 0, 00:19:52.338 "data_size": 65536 00:19:52.338 }, 00:19:52.338 { 00:19:52.338 "name": "BaseBdev4", 00:19:52.338 "uuid": "69cab96e-f708-4a7b-b7bf-f813c4576fe0", 00:19:52.338 
"is_configured": true, 00:19:52.338 "data_offset": 0, 00:19:52.338 "data_size": 65536 00:19:52.338 } 00:19:52.338 ] 00:19:52.338 }' 00:19:52.338 10:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:52.338 10:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.905 10:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.905 10:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:53.164 10:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:53.164 10:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.164 10:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:53.423 10:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a35fc248-6e1b-4f5f-8472-a5b6ed195cb9 00:19:53.682 [2024-07-25 10:34:57.211688] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:53.682 [2024-07-25 10:34:57.211752] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1776990 00:19:53.682 [2024-07-25 10:34:57.211761] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:53.682 [2024-07-25 10:34:57.211948] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1779410 00:19:53.682 [2024-07-25 10:34:57.212111] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1776990 00:19:53.682 [2024-07-25 
10:34:57.212124] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1776990 00:19:53.682 [2024-07-25 10:34:57.212338] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.682 NewBaseBdev 00:19:53.682 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:53.682 10:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:53.682 10:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:53.682 10:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:53.682 10:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:53.682 10:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:53.682 10:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:53.941 10:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:54.199 [ 00:19:54.199 { 00:19:54.199 "name": "NewBaseBdev", 00:19:54.199 "aliases": [ 00:19:54.199 "a35fc248-6e1b-4f5f-8472-a5b6ed195cb9" 00:19:54.199 ], 00:19:54.199 "product_name": "Malloc disk", 00:19:54.199 "block_size": 512, 00:19:54.199 "num_blocks": 65536, 00:19:54.199 "uuid": "a35fc248-6e1b-4f5f-8472-a5b6ed195cb9", 00:19:54.199 "assigned_rate_limits": { 00:19:54.199 "rw_ios_per_sec": 0, 00:19:54.199 "rw_mbytes_per_sec": 0, 00:19:54.199 "r_mbytes_per_sec": 0, 00:19:54.199 "w_mbytes_per_sec": 0 00:19:54.199 }, 00:19:54.199 "claimed": true, 00:19:54.199 "claim_type": "exclusive_write", 00:19:54.199 "zoned": 
false, 00:19:54.199 "supported_io_types": { 00:19:54.199 "read": true, 00:19:54.199 "write": true, 00:19:54.199 "unmap": true, 00:19:54.199 "flush": true, 00:19:54.199 "reset": true, 00:19:54.199 "nvme_admin": false, 00:19:54.199 "nvme_io": false, 00:19:54.199 "nvme_io_md": false, 00:19:54.199 "write_zeroes": true, 00:19:54.199 "zcopy": true, 00:19:54.199 "get_zone_info": false, 00:19:54.199 "zone_management": false, 00:19:54.200 "zone_append": false, 00:19:54.200 "compare": false, 00:19:54.200 "compare_and_write": false, 00:19:54.200 "abort": true, 00:19:54.200 "seek_hole": false, 00:19:54.200 "seek_data": false, 00:19:54.200 "copy": true, 00:19:54.200 "nvme_iov_md": false 00:19:54.200 }, 00:19:54.200 "memory_domains": [ 00:19:54.200 { 00:19:54.200 "dma_device_id": "system", 00:19:54.200 "dma_device_type": 1 00:19:54.200 }, 00:19:54.200 { 00:19:54.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.200 "dma_device_type": 2 00:19:54.200 } 00:19:54.200 ], 00:19:54.200 "driver_specific": {} 00:19:54.200 } 00:19:54.200 ] 00:19:54.200 10:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:54.200 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:54.200 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:54.200 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:54.200 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:54.200 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:54.200 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:54.200 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.200 10:34:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.200 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.200 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.200 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.200 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.457 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:54.457 "name": "Existed_Raid", 00:19:54.457 "uuid": "e538fed2-efd7-42c8-9317-5b6828e4f3f0", 00:19:54.457 "strip_size_kb": 0, 00:19:54.457 "state": "online", 00:19:54.457 "raid_level": "raid1", 00:19:54.457 "superblock": false, 00:19:54.457 "num_base_bdevs": 4, 00:19:54.457 "num_base_bdevs_discovered": 4, 00:19:54.457 "num_base_bdevs_operational": 4, 00:19:54.457 "base_bdevs_list": [ 00:19:54.457 { 00:19:54.457 "name": "NewBaseBdev", 00:19:54.457 "uuid": "a35fc248-6e1b-4f5f-8472-a5b6ed195cb9", 00:19:54.457 "is_configured": true, 00:19:54.457 "data_offset": 0, 00:19:54.457 "data_size": 65536 00:19:54.457 }, 00:19:54.457 { 00:19:54.457 "name": "BaseBdev2", 00:19:54.457 "uuid": "8710b901-b548-475f-96a4-c7f718a67745", 00:19:54.457 "is_configured": true, 00:19:54.457 "data_offset": 0, 00:19:54.457 "data_size": 65536 00:19:54.457 }, 00:19:54.457 { 00:19:54.457 "name": "BaseBdev3", 00:19:54.457 "uuid": "26f50d8e-dc21-4602-a3cd-226573f5273f", 00:19:54.457 "is_configured": true, 00:19:54.457 "data_offset": 0, 00:19:54.457 "data_size": 65536 00:19:54.457 }, 00:19:54.457 { 00:19:54.457 "name": "BaseBdev4", 00:19:54.457 "uuid": "69cab96e-f708-4a7b-b7bf-f813c4576fe0", 00:19:54.457 "is_configured": true, 00:19:54.457 "data_offset": 0, 00:19:54.457 
"data_size": 65536 00:19:54.457 } 00:19:54.457 ] 00:19:54.457 }' 00:19:54.457 10:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:54.457 10:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:55.033 [2024-07-25 10:34:58.683768] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:55.033 "name": "Existed_Raid", 00:19:55.033 "aliases": [ 00:19:55.033 "e538fed2-efd7-42c8-9317-5b6828e4f3f0" 00:19:55.033 ], 00:19:55.033 "product_name": "Raid Volume", 00:19:55.033 "block_size": 512, 00:19:55.033 "num_blocks": 65536, 00:19:55.033 "uuid": "e538fed2-efd7-42c8-9317-5b6828e4f3f0", 00:19:55.033 "assigned_rate_limits": { 00:19:55.033 "rw_ios_per_sec": 0, 00:19:55.033 "rw_mbytes_per_sec": 0, 00:19:55.033 "r_mbytes_per_sec": 0, 00:19:55.033 "w_mbytes_per_sec": 0 00:19:55.033 }, 00:19:55.033 "claimed": false, 
00:19:55.033 "zoned": false, 00:19:55.033 "supported_io_types": { 00:19:55.033 "read": true, 00:19:55.033 "write": true, 00:19:55.033 "unmap": false, 00:19:55.033 "flush": false, 00:19:55.033 "reset": true, 00:19:55.033 "nvme_admin": false, 00:19:55.033 "nvme_io": false, 00:19:55.033 "nvme_io_md": false, 00:19:55.033 "write_zeroes": true, 00:19:55.033 "zcopy": false, 00:19:55.033 "get_zone_info": false, 00:19:55.033 "zone_management": false, 00:19:55.033 "zone_append": false, 00:19:55.033 "compare": false, 00:19:55.033 "compare_and_write": false, 00:19:55.033 "abort": false, 00:19:55.033 "seek_hole": false, 00:19:55.033 "seek_data": false, 00:19:55.033 "copy": false, 00:19:55.033 "nvme_iov_md": false 00:19:55.033 }, 00:19:55.033 "memory_domains": [ 00:19:55.033 { 00:19:55.033 "dma_device_id": "system", 00:19:55.033 "dma_device_type": 1 00:19:55.033 }, 00:19:55.033 { 00:19:55.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.033 "dma_device_type": 2 00:19:55.033 }, 00:19:55.033 { 00:19:55.033 "dma_device_id": "system", 00:19:55.033 "dma_device_type": 1 00:19:55.033 }, 00:19:55.033 { 00:19:55.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.033 "dma_device_type": 2 00:19:55.033 }, 00:19:55.033 { 00:19:55.033 "dma_device_id": "system", 00:19:55.033 "dma_device_type": 1 00:19:55.033 }, 00:19:55.033 { 00:19:55.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.033 "dma_device_type": 2 00:19:55.033 }, 00:19:55.033 { 00:19:55.033 "dma_device_id": "system", 00:19:55.033 "dma_device_type": 1 00:19:55.033 }, 00:19:55.033 { 00:19:55.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.033 "dma_device_type": 2 00:19:55.033 } 00:19:55.033 ], 00:19:55.033 "driver_specific": { 00:19:55.033 "raid": { 00:19:55.033 "uuid": "e538fed2-efd7-42c8-9317-5b6828e4f3f0", 00:19:55.033 "strip_size_kb": 0, 00:19:55.033 "state": "online", 00:19:55.033 "raid_level": "raid1", 00:19:55.033 "superblock": false, 00:19:55.033 "num_base_bdevs": 4, 00:19:55.033 
"num_base_bdevs_discovered": 4, 00:19:55.033 "num_base_bdevs_operational": 4, 00:19:55.033 "base_bdevs_list": [ 00:19:55.033 { 00:19:55.033 "name": "NewBaseBdev", 00:19:55.033 "uuid": "a35fc248-6e1b-4f5f-8472-a5b6ed195cb9", 00:19:55.033 "is_configured": true, 00:19:55.033 "data_offset": 0, 00:19:55.033 "data_size": 65536 00:19:55.033 }, 00:19:55.033 { 00:19:55.033 "name": "BaseBdev2", 00:19:55.033 "uuid": "8710b901-b548-475f-96a4-c7f718a67745", 00:19:55.033 "is_configured": true, 00:19:55.033 "data_offset": 0, 00:19:55.033 "data_size": 65536 00:19:55.033 }, 00:19:55.033 { 00:19:55.033 "name": "BaseBdev3", 00:19:55.033 "uuid": "26f50d8e-dc21-4602-a3cd-226573f5273f", 00:19:55.033 "is_configured": true, 00:19:55.033 "data_offset": 0, 00:19:55.033 "data_size": 65536 00:19:55.033 }, 00:19:55.033 { 00:19:55.033 "name": "BaseBdev4", 00:19:55.033 "uuid": "69cab96e-f708-4a7b-b7bf-f813c4576fe0", 00:19:55.033 "is_configured": true, 00:19:55.033 "data_offset": 0, 00:19:55.033 "data_size": 65536 00:19:55.033 } 00:19:55.033 ] 00:19:55.033 } 00:19:55.033 } 00:19:55.033 }' 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:55.033 BaseBdev2 00:19:55.033 BaseBdev3 00:19:55.033 BaseBdev4' 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:55.033 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:55.300 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:55.300 "name": "NewBaseBdev", 
00:19:55.300 "aliases": [ 00:19:55.300 "a35fc248-6e1b-4f5f-8472-a5b6ed195cb9" 00:19:55.300 ], 00:19:55.300 "product_name": "Malloc disk", 00:19:55.300 "block_size": 512, 00:19:55.300 "num_blocks": 65536, 00:19:55.300 "uuid": "a35fc248-6e1b-4f5f-8472-a5b6ed195cb9", 00:19:55.300 "assigned_rate_limits": { 00:19:55.300 "rw_ios_per_sec": 0, 00:19:55.300 "rw_mbytes_per_sec": 0, 00:19:55.300 "r_mbytes_per_sec": 0, 00:19:55.300 "w_mbytes_per_sec": 0 00:19:55.300 }, 00:19:55.300 "claimed": true, 00:19:55.300 "claim_type": "exclusive_write", 00:19:55.300 "zoned": false, 00:19:55.300 "supported_io_types": { 00:19:55.300 "read": true, 00:19:55.300 "write": true, 00:19:55.300 "unmap": true, 00:19:55.300 "flush": true, 00:19:55.300 "reset": true, 00:19:55.300 "nvme_admin": false, 00:19:55.300 "nvme_io": false, 00:19:55.300 "nvme_io_md": false, 00:19:55.300 "write_zeroes": true, 00:19:55.300 "zcopy": true, 00:19:55.300 "get_zone_info": false, 00:19:55.300 "zone_management": false, 00:19:55.300 "zone_append": false, 00:19:55.300 "compare": false, 00:19:55.300 "compare_and_write": false, 00:19:55.300 "abort": true, 00:19:55.300 "seek_hole": false, 00:19:55.300 "seek_data": false, 00:19:55.300 "copy": true, 00:19:55.300 "nvme_iov_md": false 00:19:55.300 }, 00:19:55.300 "memory_domains": [ 00:19:55.300 { 00:19:55.300 "dma_device_id": "system", 00:19:55.300 "dma_device_type": 1 00:19:55.300 }, 00:19:55.300 { 00:19:55.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.300 "dma_device_type": 2 00:19:55.300 } 00:19:55.300 ], 00:19:55.300 "driver_specific": {} 00:19:55.300 }' 00:19:55.300 10:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:55.558 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:55.558 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:55.558 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:19:55.558 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:55.558 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:55.558 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.558 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.558 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:55.558 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.558 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.816 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:55.816 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:55.816 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:55.816 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:55.816 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:55.816 "name": "BaseBdev2", 00:19:55.816 "aliases": [ 00:19:55.816 "8710b901-b548-475f-96a4-c7f718a67745" 00:19:55.816 ], 00:19:55.816 "product_name": "Malloc disk", 00:19:55.816 "block_size": 512, 00:19:55.816 "num_blocks": 65536, 00:19:55.816 "uuid": "8710b901-b548-475f-96a4-c7f718a67745", 00:19:55.816 "assigned_rate_limits": { 00:19:55.817 "rw_ios_per_sec": 0, 00:19:55.817 "rw_mbytes_per_sec": 0, 00:19:55.817 "r_mbytes_per_sec": 0, 00:19:55.817 "w_mbytes_per_sec": 0 00:19:55.817 }, 00:19:55.817 "claimed": true, 00:19:55.817 "claim_type": "exclusive_write", 00:19:55.817 "zoned": false, 00:19:55.817 "supported_io_types": { 00:19:55.817 
"read": true, 00:19:55.817 "write": true, 00:19:55.817 "unmap": true, 00:19:55.817 "flush": true, 00:19:55.817 "reset": true, 00:19:55.817 "nvme_admin": false, 00:19:55.817 "nvme_io": false, 00:19:55.817 "nvme_io_md": false, 00:19:55.817 "write_zeroes": true, 00:19:55.817 "zcopy": true, 00:19:55.817 "get_zone_info": false, 00:19:55.817 "zone_management": false, 00:19:55.817 "zone_append": false, 00:19:55.817 "compare": false, 00:19:55.817 "compare_and_write": false, 00:19:55.817 "abort": true, 00:19:55.817 "seek_hole": false, 00:19:55.817 "seek_data": false, 00:19:55.817 "copy": true, 00:19:55.817 "nvme_iov_md": false 00:19:55.817 }, 00:19:55.817 "memory_domains": [ 00:19:55.817 { 00:19:55.817 "dma_device_id": "system", 00:19:55.817 "dma_device_type": 1 00:19:55.817 }, 00:19:55.817 { 00:19:55.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.817 "dma_device_type": 2 00:19:55.817 } 00:19:55.817 ], 00:19:55.817 "driver_specific": {} 00:19:55.817 }' 00:19:55.817 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:56.075 10:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:56.333 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:56.333 "name": "BaseBdev3", 00:19:56.333 "aliases": [ 00:19:56.333 "26f50d8e-dc21-4602-a3cd-226573f5273f" 00:19:56.333 ], 00:19:56.333 "product_name": "Malloc disk", 00:19:56.333 "block_size": 512, 00:19:56.333 "num_blocks": 65536, 00:19:56.333 "uuid": "26f50d8e-dc21-4602-a3cd-226573f5273f", 00:19:56.333 "assigned_rate_limits": { 00:19:56.333 "rw_ios_per_sec": 0, 00:19:56.333 "rw_mbytes_per_sec": 0, 00:19:56.333 "r_mbytes_per_sec": 0, 00:19:56.333 "w_mbytes_per_sec": 0 00:19:56.333 }, 00:19:56.333 "claimed": true, 00:19:56.333 "claim_type": "exclusive_write", 00:19:56.333 "zoned": false, 00:19:56.333 "supported_io_types": { 00:19:56.333 "read": true, 00:19:56.333 "write": true, 00:19:56.333 "unmap": true, 00:19:56.333 "flush": true, 00:19:56.333 "reset": true, 00:19:56.333 "nvme_admin": false, 00:19:56.333 "nvme_io": false, 00:19:56.333 "nvme_io_md": false, 00:19:56.333 "write_zeroes": true, 00:19:56.333 "zcopy": true, 00:19:56.333 "get_zone_info": false, 00:19:56.333 "zone_management": false, 00:19:56.333 "zone_append": false, 00:19:56.333 "compare": false, 00:19:56.333 "compare_and_write": false, 00:19:56.333 "abort": true, 00:19:56.334 "seek_hole": false, 00:19:56.334 "seek_data": false, 00:19:56.334 "copy": true, 00:19:56.334 "nvme_iov_md": 
false 00:19:56.334 }, 00:19:56.334 "memory_domains": [ 00:19:56.334 { 00:19:56.334 "dma_device_id": "system", 00:19:56.334 "dma_device_type": 1 00:19:56.334 }, 00:19:56.334 { 00:19:56.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.334 "dma_device_type": 2 00:19:56.334 } 00:19:56.334 ], 00:19:56.334 "driver_specific": {} 00:19:56.334 }' 00:19:56.334 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.590 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.591 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:56.591 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.591 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.591 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:56.591 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.591 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.591 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:56.591 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:56.591 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:56.591 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:56.591 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:56.591 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:56.591 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
00:19:56.848 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:56.848 "name": "BaseBdev4", 00:19:56.848 "aliases": [ 00:19:56.848 "69cab96e-f708-4a7b-b7bf-f813c4576fe0" 00:19:56.848 ], 00:19:56.848 "product_name": "Malloc disk", 00:19:56.848 "block_size": 512, 00:19:56.848 "num_blocks": 65536, 00:19:56.848 "uuid": "69cab96e-f708-4a7b-b7bf-f813c4576fe0", 00:19:56.848 "assigned_rate_limits": { 00:19:56.849 "rw_ios_per_sec": 0, 00:19:56.849 "rw_mbytes_per_sec": 0, 00:19:56.849 "r_mbytes_per_sec": 0, 00:19:56.849 "w_mbytes_per_sec": 0 00:19:56.849 }, 00:19:56.849 "claimed": true, 00:19:56.849 "claim_type": "exclusive_write", 00:19:56.849 "zoned": false, 00:19:56.849 "supported_io_types": { 00:19:56.849 "read": true, 00:19:56.849 "write": true, 00:19:56.849 "unmap": true, 00:19:56.849 "flush": true, 00:19:56.849 "reset": true, 00:19:56.849 "nvme_admin": false, 00:19:56.849 "nvme_io": false, 00:19:56.849 "nvme_io_md": false, 00:19:56.849 "write_zeroes": true, 00:19:56.849 "zcopy": true, 00:19:56.849 "get_zone_info": false, 00:19:56.849 "zone_management": false, 00:19:56.849 "zone_append": false, 00:19:56.849 "compare": false, 00:19:56.849 "compare_and_write": false, 00:19:56.849 "abort": true, 00:19:56.849 "seek_hole": false, 00:19:56.849 "seek_data": false, 00:19:56.849 "copy": true, 00:19:56.849 "nvme_iov_md": false 00:19:56.849 }, 00:19:56.849 "memory_domains": [ 00:19:56.849 { 00:19:56.849 "dma_device_id": "system", 00:19:56.849 "dma_device_type": 1 00:19:56.849 }, 00:19:56.849 { 00:19:56.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.849 "dma_device_type": 2 00:19:56.849 } 00:19:56.849 ], 00:19:56.849 "driver_specific": {} 00:19:56.849 }' 00:19:56.849 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.849 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.106 10:35:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:57.106 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.106 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.106 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:57.106 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.106 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.106 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:57.106 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.106 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.106 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:57.106 10:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:57.364 [2024-07-25 10:35:01.013707] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:57.364 [2024-07-25 10:35:01.013729] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.365 [2024-07-25 10:35:01.013784] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.365 [2024-07-25 10:35:01.014031] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.365 [2024-07-25 10:35:01.014045] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1776990 name Existed_Raid, state offline 00:19:57.365 10:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2415722 00:19:57.365 10:35:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2415722 ']' 00:19:57.365 10:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2415722 00:19:57.365 10:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:19:57.365 10:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:57.365 10:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2415722 00:19:57.365 10:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:57.365 10:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:57.365 10:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2415722' 00:19:57.365 killing process with pid 2415722 00:19:57.365 10:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2415722 00:19:57.365 [2024-07-25 10:35:01.057020] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:57.365 10:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2415722 00:19:57.624 [2024-07-25 10:35:01.130778] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:57.883 10:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:57.883 00:19:57.883 real 0m32.097s 00:19:57.883 user 0m59.621s 00:19:57.883 sys 0m4.396s 00:19:57.883 10:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:57.883 10:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.883 ************************************ 00:19:57.883 END TEST raid_state_function_test 00:19:57.883 ************************************ 00:19:58.141 10:35:01 bdev_raid -- 
bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:19:58.141 10:35:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:58.141 10:35:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:58.141 10:35:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:58.141 ************************************ 00:19:58.142 START TEST raid_state_function_test_sb 00:19:58.142 ************************************ 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.142 10:35:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2420164 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2420164' 00:19:58.142 Process raid pid: 2420164 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2420164 /var/tmp/spdk-raid.sock 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2420164 ']' 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:58.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.142 10:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.142 [2024-07-25 10:35:01.689632] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:19:58.142 [2024-07-25 10:35:01.689714] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.142 [2024-07-25 10:35:01.779218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.400 [2024-07-25 10:35:01.906646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.400 [2024-07-25 10:35:01.985099] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:58.400 [2024-07-25 10:35:01.985148] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:58.400 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.400 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:19:58.400 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:58.659 [2024-07-25 10:35:02.301315] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:58.659 [2024-07-25 10:35:02.301353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:58.659 [2024-07-25 10:35:02.301379] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:58.659 [2024-07-25 10:35:02.301390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:58.659 [2024-07-25 10:35:02.301398] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:58.659 [2024-07-25 10:35:02.301408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:19:58.659 [2024-07-25 10:35:02.301416] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:58.659 [2024-07-25 10:35:02.301441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:58.659 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:58.659 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:58.659 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:58.659 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:58.659 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:58.659 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:58.659 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:58.659 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:58.659 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:58.659 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:58.659 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.659 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.917 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:58.917 "name": "Existed_Raid", 00:19:58.917 "uuid": "440cd063-a861-4068-8f7b-3d4634bcd178", 
00:19:58.917 "strip_size_kb": 0, 00:19:58.917 "state": "configuring", 00:19:58.917 "raid_level": "raid1", 00:19:58.918 "superblock": true, 00:19:58.918 "num_base_bdevs": 4, 00:19:58.918 "num_base_bdevs_discovered": 0, 00:19:58.918 "num_base_bdevs_operational": 4, 00:19:58.918 "base_bdevs_list": [ 00:19:58.918 { 00:19:58.918 "name": "BaseBdev1", 00:19:58.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.918 "is_configured": false, 00:19:58.918 "data_offset": 0, 00:19:58.918 "data_size": 0 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "name": "BaseBdev2", 00:19:58.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.918 "is_configured": false, 00:19:58.918 "data_offset": 0, 00:19:58.918 "data_size": 0 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "name": "BaseBdev3", 00:19:58.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.918 "is_configured": false, 00:19:58.918 "data_offset": 0, 00:19:58.918 "data_size": 0 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "name": "BaseBdev4", 00:19:58.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.918 "is_configured": false, 00:19:58.918 "data_offset": 0, 00:19:58.918 "data_size": 0 00:19:58.918 } 00:19:58.918 ] 00:19:58.918 }' 00:19:58.918 10:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:58.918 10:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.484 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:59.741 [2024-07-25 10:35:03.380067] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:59.742 [2024-07-25 10:35:03.380124] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21a6640 name Existed_Raid, state configuring 00:19:59.742 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:00.000 [2024-07-25 10:35:03.668863] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:00.000 [2024-07-25 10:35:03.668907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:00.000 [2024-07-25 10:35:03.668919] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:00.000 [2024-07-25 10:35:03.668932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:00.000 [2024-07-25 10:35:03.668941] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:00.000 [2024-07-25 10:35:03.668953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:00.000 [2024-07-25 10:35:03.668962] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:00.000 [2024-07-25 10:35:03.668974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:00.000 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:00.257 [2024-07-25 10:35:03.966028] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:00.515 BaseBdev1 00:20:00.515 10:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:00.515 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:00.515 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:00.515 10:35:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:00.515 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:00.515 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:00.515 10:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:00.773 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:01.032 [ 00:20:01.032 { 00:20:01.032 "name": "BaseBdev1", 00:20:01.032 "aliases": [ 00:20:01.032 "14b02a73-4e0d-4615-bceb-ebcb1df11cc6" 00:20:01.032 ], 00:20:01.032 "product_name": "Malloc disk", 00:20:01.032 "block_size": 512, 00:20:01.032 "num_blocks": 65536, 00:20:01.032 "uuid": "14b02a73-4e0d-4615-bceb-ebcb1df11cc6", 00:20:01.032 "assigned_rate_limits": { 00:20:01.032 "rw_ios_per_sec": 0, 00:20:01.032 "rw_mbytes_per_sec": 0, 00:20:01.032 "r_mbytes_per_sec": 0, 00:20:01.032 "w_mbytes_per_sec": 0 00:20:01.032 }, 00:20:01.032 "claimed": true, 00:20:01.032 "claim_type": "exclusive_write", 00:20:01.032 "zoned": false, 00:20:01.032 "supported_io_types": { 00:20:01.032 "read": true, 00:20:01.032 "write": true, 00:20:01.032 "unmap": true, 00:20:01.032 "flush": true, 00:20:01.032 "reset": true, 00:20:01.032 "nvme_admin": false, 00:20:01.032 "nvme_io": false, 00:20:01.032 "nvme_io_md": false, 00:20:01.032 "write_zeroes": true, 00:20:01.032 "zcopy": true, 00:20:01.032 "get_zone_info": false, 00:20:01.032 "zone_management": false, 00:20:01.032 "zone_append": false, 00:20:01.032 "compare": false, 00:20:01.032 "compare_and_write": false, 00:20:01.032 "abort": true, 00:20:01.032 "seek_hole": false, 00:20:01.032 "seek_data": false, 
00:20:01.032 "copy": true, 00:20:01.032 "nvme_iov_md": false 00:20:01.032 }, 00:20:01.032 "memory_domains": [ 00:20:01.032 { 00:20:01.032 "dma_device_id": "system", 00:20:01.032 "dma_device_type": 1 00:20:01.032 }, 00:20:01.032 { 00:20:01.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.032 "dma_device_type": 2 00:20:01.032 } 00:20:01.032 ], 00:20:01.032 "driver_specific": {} 00:20:01.032 } 00:20:01.032 ] 00:20:01.032 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:01.032 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:01.032 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:01.032 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:01.032 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:01.032 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:01.032 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:01.032 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:01.032 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:01.032 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:01.032 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:01.032 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.032 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- 
# jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.291 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:01.291 "name": "Existed_Raid", 00:20:01.291 "uuid": "fc55fcfc-6588-445c-bbed-04a0f2260e93", 00:20:01.291 "strip_size_kb": 0, 00:20:01.291 "state": "configuring", 00:20:01.291 "raid_level": "raid1", 00:20:01.291 "superblock": true, 00:20:01.291 "num_base_bdevs": 4, 00:20:01.291 "num_base_bdevs_discovered": 1, 00:20:01.291 "num_base_bdevs_operational": 4, 00:20:01.291 "base_bdevs_list": [ 00:20:01.291 { 00:20:01.291 "name": "BaseBdev1", 00:20:01.291 "uuid": "14b02a73-4e0d-4615-bceb-ebcb1df11cc6", 00:20:01.291 "is_configured": true, 00:20:01.291 "data_offset": 2048, 00:20:01.291 "data_size": 63488 00:20:01.291 }, 00:20:01.291 { 00:20:01.291 "name": "BaseBdev2", 00:20:01.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.291 "is_configured": false, 00:20:01.291 "data_offset": 0, 00:20:01.291 "data_size": 0 00:20:01.291 }, 00:20:01.291 { 00:20:01.291 "name": "BaseBdev3", 00:20:01.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.291 "is_configured": false, 00:20:01.291 "data_offset": 0, 00:20:01.291 "data_size": 0 00:20:01.291 }, 00:20:01.291 { 00:20:01.291 "name": "BaseBdev4", 00:20:01.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.291 "is_configured": false, 00:20:01.291 "data_offset": 0, 00:20:01.291 "data_size": 0 00:20:01.291 } 00:20:01.291 ] 00:20:01.291 }' 00:20:01.291 10:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:01.291 10:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.856 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:01.856 [2024-07-25 10:35:05.550224] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:20:01.856 [2024-07-25 10:35:05.550278] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21a5e50 name Existed_Raid, state configuring 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:02.114 [2024-07-25 10:35:05.794911] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:02.114 [2024-07-25 10:35:05.796412] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:02.114 [2024-07-25 10:35:05.796447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:02.114 [2024-07-25 10:35:05.796460] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:02.114 [2024-07-25 10:35:05.796473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:02.114 [2024-07-25 10:35:05.796482] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:02.114 [2024-07-25 10:35:05.796495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:02.114 10:35:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.114 10:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.372 10:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.372 "name": "Existed_Raid", 00:20:02.372 "uuid": "cf4110e8-27b1-4a36-bca6-a4b58199daf2", 00:20:02.372 "strip_size_kb": 0, 00:20:02.372 "state": "configuring", 00:20:02.372 "raid_level": "raid1", 00:20:02.372 "superblock": true, 00:20:02.372 "num_base_bdevs": 4, 00:20:02.372 "num_base_bdevs_discovered": 1, 00:20:02.372 "num_base_bdevs_operational": 4, 00:20:02.372 "base_bdevs_list": [ 00:20:02.372 { 00:20:02.372 "name": "BaseBdev1", 00:20:02.372 "uuid": "14b02a73-4e0d-4615-bceb-ebcb1df11cc6", 00:20:02.372 "is_configured": true, 00:20:02.372 "data_offset": 2048, 00:20:02.372 "data_size": 63488 00:20:02.372 }, 00:20:02.372 { 00:20:02.372 "name": "BaseBdev2", 00:20:02.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.372 "is_configured": false, 
00:20:02.372 "data_offset": 0, 00:20:02.372 "data_size": 0 00:20:02.372 }, 00:20:02.372 { 00:20:02.372 "name": "BaseBdev3", 00:20:02.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.372 "is_configured": false, 00:20:02.372 "data_offset": 0, 00:20:02.372 "data_size": 0 00:20:02.372 }, 00:20:02.372 { 00:20:02.372 "name": "BaseBdev4", 00:20:02.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.372 "is_configured": false, 00:20:02.372 "data_offset": 0, 00:20:02.372 "data_size": 0 00:20:02.372 } 00:20:02.372 ] 00:20:02.372 }' 00:20:02.372 10:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.372 10:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.938 10:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:03.196 [2024-07-25 10:35:06.856695] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:03.196 BaseBdev2 00:20:03.196 10:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:03.196 10:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:03.196 10:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:03.196 10:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:03.196 10:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:03.196 10:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:03.196 10:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:20:03.454 10:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:03.712 [ 00:20:03.712 { 00:20:03.712 "name": "BaseBdev2", 00:20:03.712 "aliases": [ 00:20:03.712 "e369b515-f5af-4efd-9e5b-96672608e52e" 00:20:03.712 ], 00:20:03.712 "product_name": "Malloc disk", 00:20:03.712 "block_size": 512, 00:20:03.712 "num_blocks": 65536, 00:20:03.712 "uuid": "e369b515-f5af-4efd-9e5b-96672608e52e", 00:20:03.712 "assigned_rate_limits": { 00:20:03.712 "rw_ios_per_sec": 0, 00:20:03.712 "rw_mbytes_per_sec": 0, 00:20:03.712 "r_mbytes_per_sec": 0, 00:20:03.712 "w_mbytes_per_sec": 0 00:20:03.712 }, 00:20:03.712 "claimed": true, 00:20:03.712 "claim_type": "exclusive_write", 00:20:03.712 "zoned": false, 00:20:03.712 "supported_io_types": { 00:20:03.712 "read": true, 00:20:03.712 "write": true, 00:20:03.712 "unmap": true, 00:20:03.712 "flush": true, 00:20:03.712 "reset": true, 00:20:03.712 "nvme_admin": false, 00:20:03.712 "nvme_io": false, 00:20:03.712 "nvme_io_md": false, 00:20:03.712 "write_zeroes": true, 00:20:03.712 "zcopy": true, 00:20:03.712 "get_zone_info": false, 00:20:03.712 "zone_management": false, 00:20:03.712 "zone_append": false, 00:20:03.712 "compare": false, 00:20:03.712 "compare_and_write": false, 00:20:03.712 "abort": true, 00:20:03.712 "seek_hole": false, 00:20:03.712 "seek_data": false, 00:20:03.712 "copy": true, 00:20:03.712 "nvme_iov_md": false 00:20:03.712 }, 00:20:03.712 "memory_domains": [ 00:20:03.712 { 00:20:03.712 "dma_device_id": "system", 00:20:03.712 "dma_device_type": 1 00:20:03.712 }, 00:20:03.712 { 00:20:03.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.712 "dma_device_type": 2 00:20:03.712 } 00:20:03.713 ], 00:20:03.713 "driver_specific": {} 00:20:03.713 } 00:20:03.713 ] 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 
-- # return 0 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.713 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.971 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:03.971 "name": "Existed_Raid", 00:20:03.971 "uuid": "cf4110e8-27b1-4a36-bca6-a4b58199daf2", 00:20:03.971 "strip_size_kb": 0, 
00:20:03.971 "state": "configuring", 00:20:03.971 "raid_level": "raid1", 00:20:03.971 "superblock": true, 00:20:03.971 "num_base_bdevs": 4, 00:20:03.971 "num_base_bdevs_discovered": 2, 00:20:03.971 "num_base_bdevs_operational": 4, 00:20:03.971 "base_bdevs_list": [ 00:20:03.971 { 00:20:03.971 "name": "BaseBdev1", 00:20:03.971 "uuid": "14b02a73-4e0d-4615-bceb-ebcb1df11cc6", 00:20:03.971 "is_configured": true, 00:20:03.971 "data_offset": 2048, 00:20:03.971 "data_size": 63488 00:20:03.971 }, 00:20:03.971 { 00:20:03.971 "name": "BaseBdev2", 00:20:03.971 "uuid": "e369b515-f5af-4efd-9e5b-96672608e52e", 00:20:03.971 "is_configured": true, 00:20:03.971 "data_offset": 2048, 00:20:03.971 "data_size": 63488 00:20:03.971 }, 00:20:03.971 { 00:20:03.971 "name": "BaseBdev3", 00:20:03.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.971 "is_configured": false, 00:20:03.971 "data_offset": 0, 00:20:03.971 "data_size": 0 00:20:03.971 }, 00:20:03.971 { 00:20:03.971 "name": "BaseBdev4", 00:20:03.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.971 "is_configured": false, 00:20:03.971 "data_offset": 0, 00:20:03.971 "data_size": 0 00:20:03.971 } 00:20:03.971 ] 00:20:03.971 }' 00:20:03.971 10:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:03.971 10:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.538 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:04.796 [2024-07-25 10:35:08.450012] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:04.796 BaseBdev3 00:20:04.796 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:04.796 10:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:20:04.796 10:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:04.796 10:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:04.796 10:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:04.796 10:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:04.796 10:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:05.054 10:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:05.312 [ 00:20:05.312 { 00:20:05.312 "name": "BaseBdev3", 00:20:05.312 "aliases": [ 00:20:05.312 "83c34dd3-44d7-42ef-bc72-040cd4ff008d" 00:20:05.312 ], 00:20:05.312 "product_name": "Malloc disk", 00:20:05.312 "block_size": 512, 00:20:05.312 "num_blocks": 65536, 00:20:05.312 "uuid": "83c34dd3-44d7-42ef-bc72-040cd4ff008d", 00:20:05.312 "assigned_rate_limits": { 00:20:05.312 "rw_ios_per_sec": 0, 00:20:05.312 "rw_mbytes_per_sec": 0, 00:20:05.312 "r_mbytes_per_sec": 0, 00:20:05.312 "w_mbytes_per_sec": 0 00:20:05.312 }, 00:20:05.312 "claimed": true, 00:20:05.312 "claim_type": "exclusive_write", 00:20:05.312 "zoned": false, 00:20:05.312 "supported_io_types": { 00:20:05.312 "read": true, 00:20:05.312 "write": true, 00:20:05.312 "unmap": true, 00:20:05.312 "flush": true, 00:20:05.312 "reset": true, 00:20:05.312 "nvme_admin": false, 00:20:05.312 "nvme_io": false, 00:20:05.312 "nvme_io_md": false, 00:20:05.312 "write_zeroes": true, 00:20:05.312 "zcopy": true, 00:20:05.312 "get_zone_info": false, 00:20:05.312 "zone_management": false, 00:20:05.312 "zone_append": false, 00:20:05.312 
"compare": false, 00:20:05.312 "compare_and_write": false, 00:20:05.312 "abort": true, 00:20:05.312 "seek_hole": false, 00:20:05.312 "seek_data": false, 00:20:05.312 "copy": true, 00:20:05.312 "nvme_iov_md": false 00:20:05.312 }, 00:20:05.312 "memory_domains": [ 00:20:05.312 { 00:20:05.312 "dma_device_id": "system", 00:20:05.312 "dma_device_type": 1 00:20:05.312 }, 00:20:05.312 { 00:20:05.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.312 "dma_device_type": 2 00:20:05.312 } 00:20:05.312 ], 00:20:05.312 "driver_specific": {} 00:20:05.312 } 00:20:05.312 ] 00:20:05.312 10:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:05.312 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:05.312 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:05.312 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:05.312 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:05.312 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:05.312 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:05.312 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:05.312 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:05.312 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:05.313 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:05.313 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:05.313 10:35:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:05.313 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.313 10:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.571 10:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:05.571 "name": "Existed_Raid", 00:20:05.571 "uuid": "cf4110e8-27b1-4a36-bca6-a4b58199daf2", 00:20:05.571 "strip_size_kb": 0, 00:20:05.571 "state": "configuring", 00:20:05.571 "raid_level": "raid1", 00:20:05.571 "superblock": true, 00:20:05.571 "num_base_bdevs": 4, 00:20:05.571 "num_base_bdevs_discovered": 3, 00:20:05.571 "num_base_bdevs_operational": 4, 00:20:05.571 "base_bdevs_list": [ 00:20:05.571 { 00:20:05.571 "name": "BaseBdev1", 00:20:05.571 "uuid": "14b02a73-4e0d-4615-bceb-ebcb1df11cc6", 00:20:05.571 "is_configured": true, 00:20:05.571 "data_offset": 2048, 00:20:05.571 "data_size": 63488 00:20:05.571 }, 00:20:05.571 { 00:20:05.571 "name": "BaseBdev2", 00:20:05.571 "uuid": "e369b515-f5af-4efd-9e5b-96672608e52e", 00:20:05.571 "is_configured": true, 00:20:05.571 "data_offset": 2048, 00:20:05.571 "data_size": 63488 00:20:05.571 }, 00:20:05.571 { 00:20:05.571 "name": "BaseBdev3", 00:20:05.571 "uuid": "83c34dd3-44d7-42ef-bc72-040cd4ff008d", 00:20:05.571 "is_configured": true, 00:20:05.571 "data_offset": 2048, 00:20:05.571 "data_size": 63488 00:20:05.571 }, 00:20:05.571 { 00:20:05.571 "name": "BaseBdev4", 00:20:05.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.571 "is_configured": false, 00:20:05.571 "data_offset": 0, 00:20:05.571 "data_size": 0 00:20:05.571 } 00:20:05.571 ] 00:20:05.571 }' 00:20:05.571 10:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:05.571 10:35:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.137 10:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:06.394 [2024-07-25 10:35:10.031892] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:06.394 [2024-07-25 10:35:10.032143] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x21a6cb0 00:20:06.395 [2024-07-25 10:35:10.032175] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:06.395 [2024-07-25 10:35:10.032337] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21bdc70 00:20:06.395 [2024-07-25 10:35:10.032511] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21a6cb0 00:20:06.395 [2024-07-25 10:35:10.032526] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x21a6cb0 00:20:06.395 [2024-07-25 10:35:10.032617] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.395 BaseBdev4 00:20:06.395 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:20:06.395 10:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:06.395 10:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:06.395 10:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:06.395 10:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:06.395 10:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:06.395 10:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:06.652 10:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:06.910 [ 00:20:06.910 { 00:20:06.910 "name": "BaseBdev4", 00:20:06.910 "aliases": [ 00:20:06.910 "86208417-513a-46d8-9e5e-1672026bc331" 00:20:06.910 ], 00:20:06.910 "product_name": "Malloc disk", 00:20:06.910 "block_size": 512, 00:20:06.910 "num_blocks": 65536, 00:20:06.910 "uuid": "86208417-513a-46d8-9e5e-1672026bc331", 00:20:06.910 "assigned_rate_limits": { 00:20:06.910 "rw_ios_per_sec": 0, 00:20:06.910 "rw_mbytes_per_sec": 0, 00:20:06.910 "r_mbytes_per_sec": 0, 00:20:06.910 "w_mbytes_per_sec": 0 00:20:06.910 }, 00:20:06.910 "claimed": true, 00:20:06.910 "claim_type": "exclusive_write", 00:20:06.910 "zoned": false, 00:20:06.910 "supported_io_types": { 00:20:06.910 "read": true, 00:20:06.910 "write": true, 00:20:06.910 "unmap": true, 00:20:06.910 "flush": true, 00:20:06.910 "reset": true, 00:20:06.910 "nvme_admin": false, 00:20:06.910 "nvme_io": false, 00:20:06.910 "nvme_io_md": false, 00:20:06.910 "write_zeroes": true, 00:20:06.910 "zcopy": true, 00:20:06.910 "get_zone_info": false, 00:20:06.910 "zone_management": false, 00:20:06.910 "zone_append": false, 00:20:06.910 "compare": false, 00:20:06.910 "compare_and_write": false, 00:20:06.910 "abort": true, 00:20:06.910 "seek_hole": false, 00:20:06.910 "seek_data": false, 00:20:06.910 "copy": true, 00:20:06.910 "nvme_iov_md": false 00:20:06.910 }, 00:20:06.910 "memory_domains": [ 00:20:06.910 { 00:20:06.910 "dma_device_id": "system", 00:20:06.910 "dma_device_type": 1 00:20:06.910 }, 00:20:06.910 { 00:20:06.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.910 "dma_device_type": 2 00:20:06.910 } 00:20:06.910 ], 00:20:06.910 "driver_specific": {} 00:20:06.910 } 00:20:06.910 ] 
00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.910 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.167 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:07.167 "name": "Existed_Raid", 00:20:07.167 
"uuid": "cf4110e8-27b1-4a36-bca6-a4b58199daf2", 00:20:07.167 "strip_size_kb": 0, 00:20:07.167 "state": "online", 00:20:07.167 "raid_level": "raid1", 00:20:07.167 "superblock": true, 00:20:07.167 "num_base_bdevs": 4, 00:20:07.167 "num_base_bdevs_discovered": 4, 00:20:07.167 "num_base_bdevs_operational": 4, 00:20:07.167 "base_bdevs_list": [ 00:20:07.167 { 00:20:07.167 "name": "BaseBdev1", 00:20:07.167 "uuid": "14b02a73-4e0d-4615-bceb-ebcb1df11cc6", 00:20:07.167 "is_configured": true, 00:20:07.167 "data_offset": 2048, 00:20:07.167 "data_size": 63488 00:20:07.167 }, 00:20:07.167 { 00:20:07.167 "name": "BaseBdev2", 00:20:07.167 "uuid": "e369b515-f5af-4efd-9e5b-96672608e52e", 00:20:07.167 "is_configured": true, 00:20:07.167 "data_offset": 2048, 00:20:07.167 "data_size": 63488 00:20:07.167 }, 00:20:07.167 { 00:20:07.167 "name": "BaseBdev3", 00:20:07.167 "uuid": "83c34dd3-44d7-42ef-bc72-040cd4ff008d", 00:20:07.167 "is_configured": true, 00:20:07.167 "data_offset": 2048, 00:20:07.167 "data_size": 63488 00:20:07.167 }, 00:20:07.167 { 00:20:07.167 "name": "BaseBdev4", 00:20:07.167 "uuid": "86208417-513a-46d8-9e5e-1672026bc331", 00:20:07.167 "is_configured": true, 00:20:07.167 "data_offset": 2048, 00:20:07.167 "data_size": 63488 00:20:07.167 } 00:20:07.167 ] 00:20:07.167 }' 00:20:07.167 10:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:07.167 10:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.730 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:07.730 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:07.730 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:07.730 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:07.730 10:35:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:07.730 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:07.730 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:07.730 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:07.987 [2024-07-25 10:35:11.636484] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.987 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:07.987 "name": "Existed_Raid", 00:20:07.987 "aliases": [ 00:20:07.987 "cf4110e8-27b1-4a36-bca6-a4b58199daf2" 00:20:07.987 ], 00:20:07.987 "product_name": "Raid Volume", 00:20:07.987 "block_size": 512, 00:20:07.987 "num_blocks": 63488, 00:20:07.987 "uuid": "cf4110e8-27b1-4a36-bca6-a4b58199daf2", 00:20:07.987 "assigned_rate_limits": { 00:20:07.987 "rw_ios_per_sec": 0, 00:20:07.987 "rw_mbytes_per_sec": 0, 00:20:07.987 "r_mbytes_per_sec": 0, 00:20:07.987 "w_mbytes_per_sec": 0 00:20:07.987 }, 00:20:07.987 "claimed": false, 00:20:07.987 "zoned": false, 00:20:07.987 "supported_io_types": { 00:20:07.987 "read": true, 00:20:07.987 "write": true, 00:20:07.987 "unmap": false, 00:20:07.987 "flush": false, 00:20:07.987 "reset": true, 00:20:07.987 "nvme_admin": false, 00:20:07.987 "nvme_io": false, 00:20:07.987 "nvme_io_md": false, 00:20:07.987 "write_zeroes": true, 00:20:07.987 "zcopy": false, 00:20:07.987 "get_zone_info": false, 00:20:07.987 "zone_management": false, 00:20:07.987 "zone_append": false, 00:20:07.987 "compare": false, 00:20:07.987 "compare_and_write": false, 00:20:07.987 "abort": false, 00:20:07.987 "seek_hole": false, 00:20:07.987 "seek_data": false, 00:20:07.987 "copy": false, 00:20:07.987 "nvme_iov_md": false 00:20:07.987 }, 00:20:07.987 
"memory_domains": [ 00:20:07.987 { 00:20:07.987 "dma_device_id": "system", 00:20:07.987 "dma_device_type": 1 00:20:07.987 }, 00:20:07.987 { 00:20:07.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.987 "dma_device_type": 2 00:20:07.987 }, 00:20:07.987 { 00:20:07.987 "dma_device_id": "system", 00:20:07.987 "dma_device_type": 1 00:20:07.987 }, 00:20:07.987 { 00:20:07.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.987 "dma_device_type": 2 00:20:07.987 }, 00:20:07.987 { 00:20:07.987 "dma_device_id": "system", 00:20:07.987 "dma_device_type": 1 00:20:07.987 }, 00:20:07.987 { 00:20:07.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.987 "dma_device_type": 2 00:20:07.987 }, 00:20:07.987 { 00:20:07.987 "dma_device_id": "system", 00:20:07.987 "dma_device_type": 1 00:20:07.987 }, 00:20:07.987 { 00:20:07.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.987 "dma_device_type": 2 00:20:07.987 } 00:20:07.987 ], 00:20:07.987 "driver_specific": { 00:20:07.987 "raid": { 00:20:07.987 "uuid": "cf4110e8-27b1-4a36-bca6-a4b58199daf2", 00:20:07.987 "strip_size_kb": 0, 00:20:07.987 "state": "online", 00:20:07.987 "raid_level": "raid1", 00:20:07.987 "superblock": true, 00:20:07.987 "num_base_bdevs": 4, 00:20:07.987 "num_base_bdevs_discovered": 4, 00:20:07.987 "num_base_bdevs_operational": 4, 00:20:07.987 "base_bdevs_list": [ 00:20:07.987 { 00:20:07.987 "name": "BaseBdev1", 00:20:07.987 "uuid": "14b02a73-4e0d-4615-bceb-ebcb1df11cc6", 00:20:07.987 "is_configured": true, 00:20:07.987 "data_offset": 2048, 00:20:07.987 "data_size": 63488 00:20:07.987 }, 00:20:07.987 { 00:20:07.987 "name": "BaseBdev2", 00:20:07.987 "uuid": "e369b515-f5af-4efd-9e5b-96672608e52e", 00:20:07.987 "is_configured": true, 00:20:07.987 "data_offset": 2048, 00:20:07.987 "data_size": 63488 00:20:07.987 }, 00:20:07.987 { 00:20:07.987 "name": "BaseBdev3", 00:20:07.987 "uuid": "83c34dd3-44d7-42ef-bc72-040cd4ff008d", 00:20:07.987 "is_configured": true, 00:20:07.987 "data_offset": 2048, 00:20:07.987 
"data_size": 63488 00:20:07.987 }, 00:20:07.987 { 00:20:07.987 "name": "BaseBdev4", 00:20:07.987 "uuid": "86208417-513a-46d8-9e5e-1672026bc331", 00:20:07.987 "is_configured": true, 00:20:07.987 "data_offset": 2048, 00:20:07.987 "data_size": 63488 00:20:07.987 } 00:20:07.987 ] 00:20:07.987 } 00:20:07.987 } 00:20:07.987 }' 00:20:07.987 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:07.987 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:07.987 BaseBdev2 00:20:07.987 BaseBdev3 00:20:07.987 BaseBdev4' 00:20:07.987 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:07.987 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:07.987 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:08.245 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:08.245 "name": "BaseBdev1", 00:20:08.245 "aliases": [ 00:20:08.245 "14b02a73-4e0d-4615-bceb-ebcb1df11cc6" 00:20:08.245 ], 00:20:08.245 "product_name": "Malloc disk", 00:20:08.245 "block_size": 512, 00:20:08.245 "num_blocks": 65536, 00:20:08.245 "uuid": "14b02a73-4e0d-4615-bceb-ebcb1df11cc6", 00:20:08.245 "assigned_rate_limits": { 00:20:08.245 "rw_ios_per_sec": 0, 00:20:08.245 "rw_mbytes_per_sec": 0, 00:20:08.245 "r_mbytes_per_sec": 0, 00:20:08.245 "w_mbytes_per_sec": 0 00:20:08.245 }, 00:20:08.245 "claimed": true, 00:20:08.245 "claim_type": "exclusive_write", 00:20:08.245 "zoned": false, 00:20:08.245 "supported_io_types": { 00:20:08.245 "read": true, 00:20:08.245 "write": true, 00:20:08.245 "unmap": true, 00:20:08.245 "flush": true, 00:20:08.245 "reset": true, 
00:20:08.245 "nvme_admin": false, 00:20:08.245 "nvme_io": false, 00:20:08.245 "nvme_io_md": false, 00:20:08.245 "write_zeroes": true, 00:20:08.245 "zcopy": true, 00:20:08.245 "get_zone_info": false, 00:20:08.245 "zone_management": false, 00:20:08.245 "zone_append": false, 00:20:08.245 "compare": false, 00:20:08.245 "compare_and_write": false, 00:20:08.245 "abort": true, 00:20:08.245 "seek_hole": false, 00:20:08.245 "seek_data": false, 00:20:08.245 "copy": true, 00:20:08.245 "nvme_iov_md": false 00:20:08.245 }, 00:20:08.245 "memory_domains": [ 00:20:08.245 { 00:20:08.245 "dma_device_id": "system", 00:20:08.245 "dma_device_type": 1 00:20:08.245 }, 00:20:08.245 { 00:20:08.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.245 "dma_device_type": 2 00:20:08.245 } 00:20:08.245 ], 00:20:08.245 "driver_specific": {} 00:20:08.245 }' 00:20:08.245 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.503 10:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.503 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:08.503 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.503 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.503 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:08.503 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.503 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.503 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:08.503 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:08.503 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:20:08.760 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:08.760 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:08.760 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:08.760 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:09.018 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:09.018 "name": "BaseBdev2", 00:20:09.018 "aliases": [ 00:20:09.018 "e369b515-f5af-4efd-9e5b-96672608e52e" 00:20:09.018 ], 00:20:09.018 "product_name": "Malloc disk", 00:20:09.018 "block_size": 512, 00:20:09.018 "num_blocks": 65536, 00:20:09.018 "uuid": "e369b515-f5af-4efd-9e5b-96672608e52e", 00:20:09.018 "assigned_rate_limits": { 00:20:09.018 "rw_ios_per_sec": 0, 00:20:09.018 "rw_mbytes_per_sec": 0, 00:20:09.018 "r_mbytes_per_sec": 0, 00:20:09.018 "w_mbytes_per_sec": 0 00:20:09.018 }, 00:20:09.018 "claimed": true, 00:20:09.018 "claim_type": "exclusive_write", 00:20:09.018 "zoned": false, 00:20:09.018 "supported_io_types": { 00:20:09.018 "read": true, 00:20:09.018 "write": true, 00:20:09.018 "unmap": true, 00:20:09.018 "flush": true, 00:20:09.018 "reset": true, 00:20:09.018 "nvme_admin": false, 00:20:09.018 "nvme_io": false, 00:20:09.018 "nvme_io_md": false, 00:20:09.018 "write_zeroes": true, 00:20:09.018 "zcopy": true, 00:20:09.018 "get_zone_info": false, 00:20:09.018 "zone_management": false, 00:20:09.018 "zone_append": false, 00:20:09.018 "compare": false, 00:20:09.018 "compare_and_write": false, 00:20:09.018 "abort": true, 00:20:09.018 "seek_hole": false, 00:20:09.018 "seek_data": false, 00:20:09.018 "copy": true, 00:20:09.018 "nvme_iov_md": false 00:20:09.018 }, 00:20:09.018 "memory_domains": [ 00:20:09.018 { 
00:20:09.018 "dma_device_id": "system", 00:20:09.018 "dma_device_type": 1 00:20:09.018 }, 00:20:09.018 { 00:20:09.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.018 "dma_device_type": 2 00:20:09.018 } 00:20:09.018 ], 00:20:09.018 "driver_specific": {} 00:20:09.018 }' 00:20:09.018 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:09.018 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:09.018 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:09.018 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:09.018 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:09.018 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:09.018 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.018 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.018 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:09.018 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.276 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.276 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:09.276 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:09.276 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:09.276 10:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:09.533 10:35:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:09.533 "name": "BaseBdev3", 00:20:09.533 "aliases": [ 00:20:09.533 "83c34dd3-44d7-42ef-bc72-040cd4ff008d" 00:20:09.533 ], 00:20:09.533 "product_name": "Malloc disk", 00:20:09.533 "block_size": 512, 00:20:09.533 "num_blocks": 65536, 00:20:09.534 "uuid": "83c34dd3-44d7-42ef-bc72-040cd4ff008d", 00:20:09.534 "assigned_rate_limits": { 00:20:09.534 "rw_ios_per_sec": 0, 00:20:09.534 "rw_mbytes_per_sec": 0, 00:20:09.534 "r_mbytes_per_sec": 0, 00:20:09.534 "w_mbytes_per_sec": 0 00:20:09.534 }, 00:20:09.534 "claimed": true, 00:20:09.534 "claim_type": "exclusive_write", 00:20:09.534 "zoned": false, 00:20:09.534 "supported_io_types": { 00:20:09.534 "read": true, 00:20:09.534 "write": true, 00:20:09.534 "unmap": true, 00:20:09.534 "flush": true, 00:20:09.534 "reset": true, 00:20:09.534 "nvme_admin": false, 00:20:09.534 "nvme_io": false, 00:20:09.534 "nvme_io_md": false, 00:20:09.534 "write_zeroes": true, 00:20:09.534 "zcopy": true, 00:20:09.534 "get_zone_info": false, 00:20:09.534 "zone_management": false, 00:20:09.534 "zone_append": false, 00:20:09.534 "compare": false, 00:20:09.534 "compare_and_write": false, 00:20:09.534 "abort": true, 00:20:09.534 "seek_hole": false, 00:20:09.534 "seek_data": false, 00:20:09.534 "copy": true, 00:20:09.534 "nvme_iov_md": false 00:20:09.534 }, 00:20:09.534 "memory_domains": [ 00:20:09.534 { 00:20:09.534 "dma_device_id": "system", 00:20:09.534 "dma_device_type": 1 00:20:09.534 }, 00:20:09.534 { 00:20:09.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.534 "dma_device_type": 2 00:20:09.534 } 00:20:09.534 ], 00:20:09.534 "driver_specific": {} 00:20:09.534 }' 00:20:09.534 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:09.534 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:09.534 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:20:09.534 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:09.534 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:09.534 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:09.534 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.534 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.790 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:09.790 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.790 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.790 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:09.790 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:09.790 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:09.790 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:10.068 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:10.068 "name": "BaseBdev4", 00:20:10.068 "aliases": [ 00:20:10.068 "86208417-513a-46d8-9e5e-1672026bc331" 00:20:10.068 ], 00:20:10.068 "product_name": "Malloc disk", 00:20:10.068 "block_size": 512, 00:20:10.068 "num_blocks": 65536, 00:20:10.068 "uuid": "86208417-513a-46d8-9e5e-1672026bc331", 00:20:10.068 "assigned_rate_limits": { 00:20:10.068 "rw_ios_per_sec": 0, 00:20:10.068 "rw_mbytes_per_sec": 0, 00:20:10.068 "r_mbytes_per_sec": 0, 00:20:10.068 "w_mbytes_per_sec": 0 
00:20:10.068 }, 00:20:10.068 "claimed": true, 00:20:10.068 "claim_type": "exclusive_write", 00:20:10.068 "zoned": false, 00:20:10.068 "supported_io_types": { 00:20:10.068 "read": true, 00:20:10.068 "write": true, 00:20:10.068 "unmap": true, 00:20:10.068 "flush": true, 00:20:10.068 "reset": true, 00:20:10.068 "nvme_admin": false, 00:20:10.068 "nvme_io": false, 00:20:10.068 "nvme_io_md": false, 00:20:10.068 "write_zeroes": true, 00:20:10.068 "zcopy": true, 00:20:10.068 "get_zone_info": false, 00:20:10.068 "zone_management": false, 00:20:10.068 "zone_append": false, 00:20:10.068 "compare": false, 00:20:10.068 "compare_and_write": false, 00:20:10.068 "abort": true, 00:20:10.068 "seek_hole": false, 00:20:10.068 "seek_data": false, 00:20:10.068 "copy": true, 00:20:10.068 "nvme_iov_md": false 00:20:10.068 }, 00:20:10.068 "memory_domains": [ 00:20:10.068 { 00:20:10.068 "dma_device_id": "system", 00:20:10.068 "dma_device_type": 1 00:20:10.068 }, 00:20:10.068 { 00:20:10.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.068 "dma_device_type": 2 00:20:10.068 } 00:20:10.068 ], 00:20:10.068 "driver_specific": {} 00:20:10.068 }' 00:20:10.068 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.068 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.068 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:10.068 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.068 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.068 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:10.068 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.068 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.330 
10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:10.330 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.330 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.330 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:10.330 10:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:10.587 [2024-07-25 10:35:14.094795] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.587 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.845 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.845 "name": "Existed_Raid", 00:20:10.845 "uuid": "cf4110e8-27b1-4a36-bca6-a4b58199daf2", 00:20:10.845 "strip_size_kb": 0, 00:20:10.845 "state": "online", 00:20:10.845 "raid_level": "raid1", 00:20:10.845 "superblock": true, 00:20:10.845 "num_base_bdevs": 4, 00:20:10.845 "num_base_bdevs_discovered": 3, 00:20:10.845 "num_base_bdevs_operational": 3, 00:20:10.845 "base_bdevs_list": [ 00:20:10.845 { 00:20:10.845 "name": null, 00:20:10.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.845 "is_configured": false, 00:20:10.845 "data_offset": 2048, 00:20:10.845 "data_size": 63488 00:20:10.845 }, 00:20:10.845 { 00:20:10.845 "name": "BaseBdev2", 00:20:10.845 "uuid": "e369b515-f5af-4efd-9e5b-96672608e52e", 00:20:10.845 "is_configured": true, 00:20:10.845 "data_offset": 2048, 00:20:10.845 "data_size": 63488 00:20:10.845 }, 00:20:10.845 { 00:20:10.845 "name": "BaseBdev3", 00:20:10.845 "uuid": "83c34dd3-44d7-42ef-bc72-040cd4ff008d", 00:20:10.845 "is_configured": true, 00:20:10.845 "data_offset": 2048, 00:20:10.845 "data_size": 63488 00:20:10.845 }, 00:20:10.845 { 00:20:10.845 "name": 
"BaseBdev4", 00:20:10.845 "uuid": "86208417-513a-46d8-9e5e-1672026bc331", 00:20:10.845 "is_configured": true, 00:20:10.845 "data_offset": 2048, 00:20:10.845 "data_size": 63488 00:20:10.845 } 00:20:10.845 ] 00:20:10.845 }' 00:20:10.845 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.845 10:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.410 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:11.410 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:11.410 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.410 10:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:11.668 10:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:11.668 10:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:11.668 10:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:11.925 [2024-07-25 10:35:15.428823] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:11.925 10:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:11.925 10:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:11.926 10:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.926 10:35:15 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:12.183 10:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:12.183 10:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:12.183 10:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:12.441 [2024-07-25 10:35:15.975909] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:12.441 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:12.441 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:12.441 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.441 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:12.699 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:12.699 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:12.699 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:12.957 [2024-07-25 10:35:16.535109] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:12.957 [2024-07-25 10:35:16.535209] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:12.957 [2024-07-25 10:35:16.547012] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.958 [2024-07-25 10:35:16.547064] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.958 [2024-07-25 10:35:16.547091] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21a6cb0 name Existed_Raid, state offline 00:20:12.958 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:12.958 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:12.958 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.958 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:13.216 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:13.216 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:13.216 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:20:13.216 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:13.216 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:13.216 10:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:13.474 BaseBdev2 00:20:13.474 10:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:13.474 10:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:13.474 10:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:13.474 10:35:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:20:13.474 10:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:13.474 10:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:13.474 10:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:13.732 10:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:13.990 [ 00:20:13.990 { 00:20:13.990 "name": "BaseBdev2", 00:20:13.990 "aliases": [ 00:20:13.990 "57596a4c-6753-4cab-b7d7-323f17a3aad2" 00:20:13.990 ], 00:20:13.990 "product_name": "Malloc disk", 00:20:13.990 "block_size": 512, 00:20:13.990 "num_blocks": 65536, 00:20:13.990 "uuid": "57596a4c-6753-4cab-b7d7-323f17a3aad2", 00:20:13.990 "assigned_rate_limits": { 00:20:13.990 "rw_ios_per_sec": 0, 00:20:13.990 "rw_mbytes_per_sec": 0, 00:20:13.990 "r_mbytes_per_sec": 0, 00:20:13.990 "w_mbytes_per_sec": 0 00:20:13.990 }, 00:20:13.990 "claimed": false, 00:20:13.990 "zoned": false, 00:20:13.990 "supported_io_types": { 00:20:13.990 "read": true, 00:20:13.990 "write": true, 00:20:13.990 "unmap": true, 00:20:13.990 "flush": true, 00:20:13.990 "reset": true, 00:20:13.990 "nvme_admin": false, 00:20:13.990 "nvme_io": false, 00:20:13.990 "nvme_io_md": false, 00:20:13.990 "write_zeroes": true, 00:20:13.990 "zcopy": true, 00:20:13.990 "get_zone_info": false, 00:20:13.990 "zone_management": false, 00:20:13.990 "zone_append": false, 00:20:13.990 "compare": false, 00:20:13.990 "compare_and_write": false, 00:20:13.990 "abort": true, 00:20:13.990 "seek_hole": false, 00:20:13.990 "seek_data": false, 00:20:13.990 "copy": true, 00:20:13.990 "nvme_iov_md": false 00:20:13.990 }, 00:20:13.990 
"memory_domains": [ 00:20:13.990 { 00:20:13.990 "dma_device_id": "system", 00:20:13.990 "dma_device_type": 1 00:20:13.990 }, 00:20:13.990 { 00:20:13.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.990 "dma_device_type": 2 00:20:13.990 } 00:20:13.990 ], 00:20:13.990 "driver_specific": {} 00:20:13.990 } 00:20:13.990 ] 00:20:13.990 10:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:13.990 10:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:13.990 10:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:13.990 10:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:14.250 BaseBdev3 00:20:14.250 10:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:14.250 10:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:14.250 10:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:14.250 10:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:14.250 10:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:14.251 10:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:14.251 10:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:14.509 10:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:20:14.766 [ 00:20:14.766 { 00:20:14.766 "name": "BaseBdev3", 00:20:14.766 "aliases": [ 00:20:14.766 "3efb423a-98a4-4851-911d-000d6eb5fc4b" 00:20:14.766 ], 00:20:14.766 "product_name": "Malloc disk", 00:20:14.766 "block_size": 512, 00:20:14.766 "num_blocks": 65536, 00:20:14.766 "uuid": "3efb423a-98a4-4851-911d-000d6eb5fc4b", 00:20:14.766 "assigned_rate_limits": { 00:20:14.766 "rw_ios_per_sec": 0, 00:20:14.766 "rw_mbytes_per_sec": 0, 00:20:14.766 "r_mbytes_per_sec": 0, 00:20:14.766 "w_mbytes_per_sec": 0 00:20:14.766 }, 00:20:14.766 "claimed": false, 00:20:14.766 "zoned": false, 00:20:14.766 "supported_io_types": { 00:20:14.766 "read": true, 00:20:14.766 "write": true, 00:20:14.766 "unmap": true, 00:20:14.766 "flush": true, 00:20:14.766 "reset": true, 00:20:14.766 "nvme_admin": false, 00:20:14.766 "nvme_io": false, 00:20:14.766 "nvme_io_md": false, 00:20:14.766 "write_zeroes": true, 00:20:14.766 "zcopy": true, 00:20:14.766 "get_zone_info": false, 00:20:14.766 "zone_management": false, 00:20:14.766 "zone_append": false, 00:20:14.766 "compare": false, 00:20:14.766 "compare_and_write": false, 00:20:14.766 "abort": true, 00:20:14.766 "seek_hole": false, 00:20:14.766 "seek_data": false, 00:20:14.766 "copy": true, 00:20:14.766 "nvme_iov_md": false 00:20:14.766 }, 00:20:14.766 "memory_domains": [ 00:20:14.766 { 00:20:14.766 "dma_device_id": "system", 00:20:14.766 "dma_device_type": 1 00:20:14.766 }, 00:20:14.766 { 00:20:14.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.766 "dma_device_type": 2 00:20:14.766 } 00:20:14.766 ], 00:20:14.766 "driver_specific": {} 00:20:14.766 } 00:20:14.766 ] 00:20:14.766 10:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:14.766 10:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:14.766 10:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:14.766 10:35:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:15.024 BaseBdev4 00:20:15.024 10:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:20:15.024 10:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:15.024 10:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:15.024 10:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:15.024 10:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:15.024 10:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:15.024 10:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:15.282 10:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:15.541 [ 00:20:15.541 { 00:20:15.541 "name": "BaseBdev4", 00:20:15.541 "aliases": [ 00:20:15.541 "17cd21c4-cb9a-4d77-8015-b8256e6d3b44" 00:20:15.541 ], 00:20:15.541 "product_name": "Malloc disk", 00:20:15.541 "block_size": 512, 00:20:15.541 "num_blocks": 65536, 00:20:15.541 "uuid": "17cd21c4-cb9a-4d77-8015-b8256e6d3b44", 00:20:15.541 "assigned_rate_limits": { 00:20:15.541 "rw_ios_per_sec": 0, 00:20:15.541 "rw_mbytes_per_sec": 0, 00:20:15.541 "r_mbytes_per_sec": 0, 00:20:15.541 "w_mbytes_per_sec": 0 00:20:15.541 }, 00:20:15.541 "claimed": false, 00:20:15.541 "zoned": false, 00:20:15.541 "supported_io_types": { 00:20:15.541 "read": true, 
00:20:15.541 "write": true, 00:20:15.541 "unmap": true, 00:20:15.541 "flush": true, 00:20:15.541 "reset": true, 00:20:15.541 "nvme_admin": false, 00:20:15.541 "nvme_io": false, 00:20:15.541 "nvme_io_md": false, 00:20:15.541 "write_zeroes": true, 00:20:15.541 "zcopy": true, 00:20:15.541 "get_zone_info": false, 00:20:15.541 "zone_management": false, 00:20:15.541 "zone_append": false, 00:20:15.541 "compare": false, 00:20:15.541 "compare_and_write": false, 00:20:15.541 "abort": true, 00:20:15.541 "seek_hole": false, 00:20:15.541 "seek_data": false, 00:20:15.541 "copy": true, 00:20:15.541 "nvme_iov_md": false 00:20:15.541 }, 00:20:15.541 "memory_domains": [ 00:20:15.541 { 00:20:15.541 "dma_device_id": "system", 00:20:15.541 "dma_device_type": 1 00:20:15.541 }, 00:20:15.541 { 00:20:15.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.541 "dma_device_type": 2 00:20:15.541 } 00:20:15.541 ], 00:20:15.541 "driver_specific": {} 00:20:15.541 } 00:20:15.541 ] 00:20:15.541 10:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:15.541 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:15.541 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:15.541 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:15.799 [2024-07-25 10:35:19.353745] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:15.799 [2024-07-25 10:35:19.353785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:15.799 [2024-07-25 10:35:19.353811] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:15.799 [2024-07-25 10:35:19.355072] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:15.799 [2024-07-25 10:35:19.355138] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:15.799 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:15.799 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:15.799 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:15.799 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:15.799 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:15.799 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:15.799 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:15.800 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:15.800 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:15.800 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:15.800 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.800 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.057 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.057 "name": "Existed_Raid", 00:20:16.057 "uuid": "e488c543-50a1-4447-a3d5-811e30ef033d", 00:20:16.057 "strip_size_kb": 0, 00:20:16.057 "state": 
"configuring", 00:20:16.057 "raid_level": "raid1", 00:20:16.057 "superblock": true, 00:20:16.057 "num_base_bdevs": 4, 00:20:16.057 "num_base_bdevs_discovered": 3, 00:20:16.057 "num_base_bdevs_operational": 4, 00:20:16.057 "base_bdevs_list": [ 00:20:16.057 { 00:20:16.057 "name": "BaseBdev1", 00:20:16.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.057 "is_configured": false, 00:20:16.057 "data_offset": 0, 00:20:16.057 "data_size": 0 00:20:16.057 }, 00:20:16.057 { 00:20:16.057 "name": "BaseBdev2", 00:20:16.057 "uuid": "57596a4c-6753-4cab-b7d7-323f17a3aad2", 00:20:16.057 "is_configured": true, 00:20:16.057 "data_offset": 2048, 00:20:16.057 "data_size": 63488 00:20:16.057 }, 00:20:16.057 { 00:20:16.057 "name": "BaseBdev3", 00:20:16.057 "uuid": "3efb423a-98a4-4851-911d-000d6eb5fc4b", 00:20:16.057 "is_configured": true, 00:20:16.057 "data_offset": 2048, 00:20:16.057 "data_size": 63488 00:20:16.057 }, 00:20:16.057 { 00:20:16.057 "name": "BaseBdev4", 00:20:16.057 "uuid": "17cd21c4-cb9a-4d77-8015-b8256e6d3b44", 00:20:16.057 "is_configured": true, 00:20:16.057 "data_offset": 2048, 00:20:16.058 "data_size": 63488 00:20:16.058 } 00:20:16.058 ] 00:20:16.058 }' 00:20:16.058 10:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.058 10:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.624 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:16.882 [2024-07-25 10:35:20.392483] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:16.882 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:16.882 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.882 
10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:16.882 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:16.882 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:16.882 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:16.882 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.882 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.882 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.882 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.882 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.882 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.140 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:17.140 "name": "Existed_Raid", 00:20:17.140 "uuid": "e488c543-50a1-4447-a3d5-811e30ef033d", 00:20:17.140 "strip_size_kb": 0, 00:20:17.140 "state": "configuring", 00:20:17.140 "raid_level": "raid1", 00:20:17.140 "superblock": true, 00:20:17.140 "num_base_bdevs": 4, 00:20:17.140 "num_base_bdevs_discovered": 2, 00:20:17.140 "num_base_bdevs_operational": 4, 00:20:17.140 "base_bdevs_list": [ 00:20:17.140 { 00:20:17.140 "name": "BaseBdev1", 00:20:17.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.140 "is_configured": false, 00:20:17.140 "data_offset": 0, 00:20:17.140 "data_size": 0 00:20:17.140 }, 00:20:17.140 { 00:20:17.140 
"name": null, 00:20:17.140 "uuid": "57596a4c-6753-4cab-b7d7-323f17a3aad2", 00:20:17.140 "is_configured": false, 00:20:17.140 "data_offset": 2048, 00:20:17.140 "data_size": 63488 00:20:17.140 }, 00:20:17.140 { 00:20:17.140 "name": "BaseBdev3", 00:20:17.140 "uuid": "3efb423a-98a4-4851-911d-000d6eb5fc4b", 00:20:17.140 "is_configured": true, 00:20:17.140 "data_offset": 2048, 00:20:17.140 "data_size": 63488 00:20:17.140 }, 00:20:17.140 { 00:20:17.140 "name": "BaseBdev4", 00:20:17.140 "uuid": "17cd21c4-cb9a-4d77-8015-b8256e6d3b44", 00:20:17.140 "is_configured": true, 00:20:17.140 "data_offset": 2048, 00:20:17.140 "data_size": 63488 00:20:17.140 } 00:20:17.140 ] 00:20:17.140 }' 00:20:17.140 10:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:17.140 10:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.706 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.706 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:17.964 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:17.964 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:18.222 [2024-07-25 10:35:21.716762] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:18.222 BaseBdev1 00:20:18.222 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:18.222 10:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:18.222 10:35:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:18.223 10:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:18.223 10:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:18.223 10:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:18.223 10:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:18.481 10:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:18.739 [ 00:20:18.739 { 00:20:18.739 "name": "BaseBdev1", 00:20:18.739 "aliases": [ 00:20:18.739 "ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a" 00:20:18.739 ], 00:20:18.739 "product_name": "Malloc disk", 00:20:18.739 "block_size": 512, 00:20:18.739 "num_blocks": 65536, 00:20:18.739 "uuid": "ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a", 00:20:18.739 "assigned_rate_limits": { 00:20:18.739 "rw_ios_per_sec": 0, 00:20:18.739 "rw_mbytes_per_sec": 0, 00:20:18.739 "r_mbytes_per_sec": 0, 00:20:18.739 "w_mbytes_per_sec": 0 00:20:18.739 }, 00:20:18.739 "claimed": true, 00:20:18.739 "claim_type": "exclusive_write", 00:20:18.739 "zoned": false, 00:20:18.739 "supported_io_types": { 00:20:18.739 "read": true, 00:20:18.739 "write": true, 00:20:18.739 "unmap": true, 00:20:18.739 "flush": true, 00:20:18.739 "reset": true, 00:20:18.739 "nvme_admin": false, 00:20:18.739 "nvme_io": false, 00:20:18.739 "nvme_io_md": false, 00:20:18.739 "write_zeroes": true, 00:20:18.739 "zcopy": true, 00:20:18.739 "get_zone_info": false, 00:20:18.739 "zone_management": false, 00:20:18.739 "zone_append": false, 00:20:18.739 "compare": false, 00:20:18.739 
"compare_and_write": false, 00:20:18.739 "abort": true, 00:20:18.739 "seek_hole": false, 00:20:18.739 "seek_data": false, 00:20:18.739 "copy": true, 00:20:18.739 "nvme_iov_md": false 00:20:18.739 }, 00:20:18.739 "memory_domains": [ 00:20:18.739 { 00:20:18.739 "dma_device_id": "system", 00:20:18.739 "dma_device_type": 1 00:20:18.739 }, 00:20:18.739 { 00:20:18.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.739 "dma_device_type": 2 00:20:18.739 } 00:20:18.739 ], 00:20:18.739 "driver_specific": {} 00:20:18.739 } 00:20:18.739 ] 00:20:18.739 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:18.739 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:18.739 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:18.739 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:18.739 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:18.739 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:18.739 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:18.739 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:18.739 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:18.739 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:18.739 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:18.739 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.739 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.998 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.998 "name": "Existed_Raid", 00:20:18.998 "uuid": "e488c543-50a1-4447-a3d5-811e30ef033d", 00:20:18.998 "strip_size_kb": 0, 00:20:18.998 "state": "configuring", 00:20:18.998 "raid_level": "raid1", 00:20:18.998 "superblock": true, 00:20:18.998 "num_base_bdevs": 4, 00:20:18.998 "num_base_bdevs_discovered": 3, 00:20:18.998 "num_base_bdevs_operational": 4, 00:20:18.998 "base_bdevs_list": [ 00:20:18.998 { 00:20:18.998 "name": "BaseBdev1", 00:20:18.998 "uuid": "ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a", 00:20:18.998 "is_configured": true, 00:20:18.998 "data_offset": 2048, 00:20:18.998 "data_size": 63488 00:20:18.998 }, 00:20:18.998 { 00:20:18.998 "name": null, 00:20:18.998 "uuid": "57596a4c-6753-4cab-b7d7-323f17a3aad2", 00:20:18.998 "is_configured": false, 00:20:18.998 "data_offset": 2048, 00:20:18.998 "data_size": 63488 00:20:18.998 }, 00:20:18.998 { 00:20:18.998 "name": "BaseBdev3", 00:20:18.998 "uuid": "3efb423a-98a4-4851-911d-000d6eb5fc4b", 00:20:18.998 "is_configured": true, 00:20:18.998 "data_offset": 2048, 00:20:18.998 "data_size": 63488 00:20:18.998 }, 00:20:18.998 { 00:20:18.998 "name": "BaseBdev4", 00:20:18.998 "uuid": "17cd21c4-cb9a-4d77-8015-b8256e6d3b44", 00:20:18.998 "is_configured": true, 00:20:18.998 "data_offset": 2048, 00:20:18.998 "data_size": 63488 00:20:18.998 } 00:20:18.998 ] 00:20:18.998 }' 00:20:18.998 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.998 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.564 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.564 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:19.822 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:19.822 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:19.822 [2024-07-25 10:35:23.513597] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:20.080 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:20.080 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:20.080 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:20.080 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:20.080 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:20.080 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:20.080 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.080 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.080 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.080 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.080 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:20.080 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.080 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.080 "name": "Existed_Raid", 00:20:20.080 "uuid": "e488c543-50a1-4447-a3d5-811e30ef033d", 00:20:20.080 "strip_size_kb": 0, 00:20:20.080 "state": "configuring", 00:20:20.080 "raid_level": "raid1", 00:20:20.080 "superblock": true, 00:20:20.080 "num_base_bdevs": 4, 00:20:20.080 "num_base_bdevs_discovered": 2, 00:20:20.080 "num_base_bdevs_operational": 4, 00:20:20.080 "base_bdevs_list": [ 00:20:20.080 { 00:20:20.080 "name": "BaseBdev1", 00:20:20.080 "uuid": "ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a", 00:20:20.080 "is_configured": true, 00:20:20.080 "data_offset": 2048, 00:20:20.080 "data_size": 63488 00:20:20.080 }, 00:20:20.080 { 00:20:20.080 "name": null, 00:20:20.080 "uuid": "57596a4c-6753-4cab-b7d7-323f17a3aad2", 00:20:20.080 "is_configured": false, 00:20:20.080 "data_offset": 2048, 00:20:20.080 "data_size": 63488 00:20:20.080 }, 00:20:20.080 { 00:20:20.080 "name": null, 00:20:20.080 "uuid": "3efb423a-98a4-4851-911d-000d6eb5fc4b", 00:20:20.080 "is_configured": false, 00:20:20.080 "data_offset": 2048, 00:20:20.080 "data_size": 63488 00:20:20.080 }, 00:20:20.080 { 00:20:20.080 "name": "BaseBdev4", 00:20:20.080 "uuid": "17cd21c4-cb9a-4d77-8015-b8256e6d3b44", 00:20:20.080 "is_configured": true, 00:20:20.080 "data_offset": 2048, 00:20:20.080 "data_size": 63488 00:20:20.081 } 00:20:20.081 ] 00:20:20.081 }' 00:20:20.081 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.081 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.646 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:20.646 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:20.904 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:20.904 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:21.162 [2024-07-25 10:35:24.813192] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:21.162 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:21.162 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:21.162 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:21.162 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:21.162 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:21.162 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:21.162 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:21.162 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:21.162 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:21.162 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:21.162 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:21.162 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.420 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:21.420 "name": "Existed_Raid", 00:20:21.420 "uuid": "e488c543-50a1-4447-a3d5-811e30ef033d", 00:20:21.420 "strip_size_kb": 0, 00:20:21.420 "state": "configuring", 00:20:21.420 "raid_level": "raid1", 00:20:21.420 "superblock": true, 00:20:21.420 "num_base_bdevs": 4, 00:20:21.420 "num_base_bdevs_discovered": 3, 00:20:21.420 "num_base_bdevs_operational": 4, 00:20:21.420 "base_bdevs_list": [ 00:20:21.420 { 00:20:21.420 "name": "BaseBdev1", 00:20:21.420 "uuid": "ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a", 00:20:21.420 "is_configured": true, 00:20:21.420 "data_offset": 2048, 00:20:21.420 "data_size": 63488 00:20:21.420 }, 00:20:21.420 { 00:20:21.420 "name": null, 00:20:21.420 "uuid": "57596a4c-6753-4cab-b7d7-323f17a3aad2", 00:20:21.420 "is_configured": false, 00:20:21.420 "data_offset": 2048, 00:20:21.420 "data_size": 63488 00:20:21.420 }, 00:20:21.420 { 00:20:21.420 "name": "BaseBdev3", 00:20:21.420 "uuid": "3efb423a-98a4-4851-911d-000d6eb5fc4b", 00:20:21.420 "is_configured": true, 00:20:21.420 "data_offset": 2048, 00:20:21.420 "data_size": 63488 00:20:21.420 }, 00:20:21.420 { 00:20:21.420 "name": "BaseBdev4", 00:20:21.420 "uuid": "17cd21c4-cb9a-4d77-8015-b8256e6d3b44", 00:20:21.420 "is_configured": true, 00:20:21.420 "data_offset": 2048, 00:20:21.420 "data_size": 63488 00:20:21.420 } 00:20:21.420 ] 00:20:21.420 }' 00:20:21.420 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:21.420 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.986 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:21.986 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:22.244 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:22.244 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:22.503 [2024-07-25 10:35:26.152792] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:22.503 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:22.503 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:22.503 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:22.503 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:22.503 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:22.503 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:22.503 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:22.503 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:22.503 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:22.503 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:22.503 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.503 10:35:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.761 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:22.761 "name": "Existed_Raid", 00:20:22.762 "uuid": "e488c543-50a1-4447-a3d5-811e30ef033d", 00:20:22.762 "strip_size_kb": 0, 00:20:22.762 "state": "configuring", 00:20:22.762 "raid_level": "raid1", 00:20:22.762 "superblock": true, 00:20:22.762 "num_base_bdevs": 4, 00:20:22.762 "num_base_bdevs_discovered": 2, 00:20:22.762 "num_base_bdevs_operational": 4, 00:20:22.762 "base_bdevs_list": [ 00:20:22.762 { 00:20:22.762 "name": null, 00:20:22.762 "uuid": "ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a", 00:20:22.762 "is_configured": false, 00:20:22.762 "data_offset": 2048, 00:20:22.762 "data_size": 63488 00:20:22.762 }, 00:20:22.762 { 00:20:22.762 "name": null, 00:20:22.762 "uuid": "57596a4c-6753-4cab-b7d7-323f17a3aad2", 00:20:22.762 "is_configured": false, 00:20:22.762 "data_offset": 2048, 00:20:22.762 "data_size": 63488 00:20:22.762 }, 00:20:22.762 { 00:20:22.762 "name": "BaseBdev3", 00:20:22.762 "uuid": "3efb423a-98a4-4851-911d-000d6eb5fc4b", 00:20:22.762 "is_configured": true, 00:20:22.762 "data_offset": 2048, 00:20:22.762 "data_size": 63488 00:20:22.762 }, 00:20:22.762 { 00:20:22.762 "name": "BaseBdev4", 00:20:22.762 "uuid": "17cd21c4-cb9a-4d77-8015-b8256e6d3b44", 00:20:22.762 "is_configured": true, 00:20:22.762 "data_offset": 2048, 00:20:22.762 "data_size": 63488 00:20:22.762 } 00:20:22.762 ] 00:20:22.762 }' 00:20:22.762 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:22.762 10:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.327 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.327 10:35:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:23.585 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:23.585 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:23.843 [2024-07-25 10:35:27.535199] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.843 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:23.843 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:23.843 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:23.843 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:23.843 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:23.843 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:23.843 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:23.843 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:23.843 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:23.843 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.101 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.101 10:35:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.101 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.101 "name": "Existed_Raid", 00:20:24.101 "uuid": "e488c543-50a1-4447-a3d5-811e30ef033d", 00:20:24.101 "strip_size_kb": 0, 00:20:24.101 "state": "configuring", 00:20:24.101 "raid_level": "raid1", 00:20:24.101 "superblock": true, 00:20:24.101 "num_base_bdevs": 4, 00:20:24.101 "num_base_bdevs_discovered": 3, 00:20:24.101 "num_base_bdevs_operational": 4, 00:20:24.101 "base_bdevs_list": [ 00:20:24.101 { 00:20:24.101 "name": null, 00:20:24.101 "uuid": "ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a", 00:20:24.101 "is_configured": false, 00:20:24.101 "data_offset": 2048, 00:20:24.101 "data_size": 63488 00:20:24.101 }, 00:20:24.101 { 00:20:24.101 "name": "BaseBdev2", 00:20:24.101 "uuid": "57596a4c-6753-4cab-b7d7-323f17a3aad2", 00:20:24.101 "is_configured": true, 00:20:24.101 "data_offset": 2048, 00:20:24.101 "data_size": 63488 00:20:24.101 }, 00:20:24.101 { 00:20:24.101 "name": "BaseBdev3", 00:20:24.101 "uuid": "3efb423a-98a4-4851-911d-000d6eb5fc4b", 00:20:24.101 "is_configured": true, 00:20:24.101 "data_offset": 2048, 00:20:24.101 "data_size": 63488 00:20:24.101 }, 00:20:24.101 { 00:20:24.101 "name": "BaseBdev4", 00:20:24.101 "uuid": "17cd21c4-cb9a-4d77-8015-b8256e6d3b44", 00:20:24.101 "is_configured": true, 00:20:24.101 "data_offset": 2048, 00:20:24.101 "data_size": 63488 00:20:24.101 } 00:20:24.101 ] 00:20:24.101 }' 00:20:24.101 10:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.101 10:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.035 10:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.035 10:35:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:25.035 10:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:25.036 10:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.036 10:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:25.294 10:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a 00:20:25.553 [2024-07-25 10:35:29.154206] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:25.553 [2024-07-25 10:35:29.154436] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x234cc30 00:20:25.553 [2024-07-25 10:35:29.154456] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:25.553 [2024-07-25 10:35:29.154641] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x234d410 00:20:25.553 [2024-07-25 10:35:29.154793] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x234cc30 00:20:25.553 [2024-07-25 10:35:29.154810] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x234cc30 00:20:25.553 [2024-07-25 10:35:29.154921] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.553 NewBaseBdev 00:20:25.553 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:25.553 10:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:25.553 10:35:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:25.553 10:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:25.553 10:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:25.553 10:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:25.553 10:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:25.811 10:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:26.069 [ 00:20:26.069 { 00:20:26.069 "name": "NewBaseBdev", 00:20:26.069 "aliases": [ 00:20:26.069 "ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a" 00:20:26.069 ], 00:20:26.069 "product_name": "Malloc disk", 00:20:26.069 "block_size": 512, 00:20:26.069 "num_blocks": 65536, 00:20:26.069 "uuid": "ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a", 00:20:26.069 "assigned_rate_limits": { 00:20:26.069 "rw_ios_per_sec": 0, 00:20:26.069 "rw_mbytes_per_sec": 0, 00:20:26.069 "r_mbytes_per_sec": 0, 00:20:26.069 "w_mbytes_per_sec": 0 00:20:26.069 }, 00:20:26.069 "claimed": true, 00:20:26.069 "claim_type": "exclusive_write", 00:20:26.069 "zoned": false, 00:20:26.069 "supported_io_types": { 00:20:26.069 "read": true, 00:20:26.069 "write": true, 00:20:26.069 "unmap": true, 00:20:26.069 "flush": true, 00:20:26.069 "reset": true, 00:20:26.069 "nvme_admin": false, 00:20:26.069 "nvme_io": false, 00:20:26.069 "nvme_io_md": false, 00:20:26.069 "write_zeroes": true, 00:20:26.069 "zcopy": true, 00:20:26.069 "get_zone_info": false, 00:20:26.069 "zone_management": false, 00:20:26.069 "zone_append": false, 00:20:26.069 "compare": false, 00:20:26.069 
"compare_and_write": false, 00:20:26.069 "abort": true, 00:20:26.069 "seek_hole": false, 00:20:26.069 "seek_data": false, 00:20:26.069 "copy": true, 00:20:26.069 "nvme_iov_md": false 00:20:26.069 }, 00:20:26.069 "memory_domains": [ 00:20:26.069 { 00:20:26.069 "dma_device_id": "system", 00:20:26.069 "dma_device_type": 1 00:20:26.069 }, 00:20:26.069 { 00:20:26.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.069 "dma_device_type": 2 00:20:26.069 } 00:20:26.069 ], 00:20:26.069 "driver_specific": {} 00:20:26.069 } 00:20:26.069 ] 00:20:26.069 10:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:26.069 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:26.069 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:26.069 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:26.069 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:26.069 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:26.069 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:26.069 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:26.069 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:26.069 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:26.069 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:26.069 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:26.069 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.387 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.387 "name": "Existed_Raid", 00:20:26.387 "uuid": "e488c543-50a1-4447-a3d5-811e30ef033d", 00:20:26.387 "strip_size_kb": 0, 00:20:26.387 "state": "online", 00:20:26.387 "raid_level": "raid1", 00:20:26.387 "superblock": true, 00:20:26.387 "num_base_bdevs": 4, 00:20:26.387 "num_base_bdevs_discovered": 4, 00:20:26.387 "num_base_bdevs_operational": 4, 00:20:26.387 "base_bdevs_list": [ 00:20:26.387 { 00:20:26.387 "name": "NewBaseBdev", 00:20:26.387 "uuid": "ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a", 00:20:26.387 "is_configured": true, 00:20:26.387 "data_offset": 2048, 00:20:26.387 "data_size": 63488 00:20:26.387 }, 00:20:26.387 { 00:20:26.387 "name": "BaseBdev2", 00:20:26.387 "uuid": "57596a4c-6753-4cab-b7d7-323f17a3aad2", 00:20:26.387 "is_configured": true, 00:20:26.387 "data_offset": 2048, 00:20:26.387 "data_size": 63488 00:20:26.387 }, 00:20:26.387 { 00:20:26.387 "name": "BaseBdev3", 00:20:26.387 "uuid": "3efb423a-98a4-4851-911d-000d6eb5fc4b", 00:20:26.387 "is_configured": true, 00:20:26.387 "data_offset": 2048, 00:20:26.387 "data_size": 63488 00:20:26.387 }, 00:20:26.387 { 00:20:26.387 "name": "BaseBdev4", 00:20:26.387 "uuid": "17cd21c4-cb9a-4d77-8015-b8256e6d3b44", 00:20:26.387 "is_configured": true, 00:20:26.387 "data_offset": 2048, 00:20:26.387 "data_size": 63488 00:20:26.387 } 00:20:26.387 ] 00:20:26.387 }' 00:20:26.387 10:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.387 10:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.956 10:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:26.956 10:35:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:26.956 10:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:26.956 10:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:26.956 10:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:26.956 10:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:26.956 10:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:26.956 10:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:27.214 [2024-07-25 10:35:30.738757] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:27.214 10:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:27.214 "name": "Existed_Raid", 00:20:27.214 "aliases": [ 00:20:27.214 "e488c543-50a1-4447-a3d5-811e30ef033d" 00:20:27.214 ], 00:20:27.214 "product_name": "Raid Volume", 00:20:27.214 "block_size": 512, 00:20:27.214 "num_blocks": 63488, 00:20:27.214 "uuid": "e488c543-50a1-4447-a3d5-811e30ef033d", 00:20:27.214 "assigned_rate_limits": { 00:20:27.214 "rw_ios_per_sec": 0, 00:20:27.214 "rw_mbytes_per_sec": 0, 00:20:27.214 "r_mbytes_per_sec": 0, 00:20:27.214 "w_mbytes_per_sec": 0 00:20:27.214 }, 00:20:27.214 "claimed": false, 00:20:27.214 "zoned": false, 00:20:27.214 "supported_io_types": { 00:20:27.214 "read": true, 00:20:27.214 "write": true, 00:20:27.214 "unmap": false, 00:20:27.214 "flush": false, 00:20:27.214 "reset": true, 00:20:27.214 "nvme_admin": false, 00:20:27.214 "nvme_io": false, 00:20:27.214 "nvme_io_md": false, 00:20:27.214 "write_zeroes": true, 00:20:27.214 "zcopy": false, 00:20:27.214 
"get_zone_info": false, 00:20:27.214 "zone_management": false, 00:20:27.214 "zone_append": false, 00:20:27.214 "compare": false, 00:20:27.214 "compare_and_write": false, 00:20:27.214 "abort": false, 00:20:27.214 "seek_hole": false, 00:20:27.214 "seek_data": false, 00:20:27.214 "copy": false, 00:20:27.214 "nvme_iov_md": false 00:20:27.214 }, 00:20:27.214 "memory_domains": [ 00:20:27.214 { 00:20:27.214 "dma_device_id": "system", 00:20:27.214 "dma_device_type": 1 00:20:27.214 }, 00:20:27.214 { 00:20:27.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.214 "dma_device_type": 2 00:20:27.214 }, 00:20:27.214 { 00:20:27.214 "dma_device_id": "system", 00:20:27.214 "dma_device_type": 1 00:20:27.214 }, 00:20:27.214 { 00:20:27.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.215 "dma_device_type": 2 00:20:27.215 }, 00:20:27.215 { 00:20:27.215 "dma_device_id": "system", 00:20:27.215 "dma_device_type": 1 00:20:27.215 }, 00:20:27.215 { 00:20:27.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.215 "dma_device_type": 2 00:20:27.215 }, 00:20:27.215 { 00:20:27.215 "dma_device_id": "system", 00:20:27.215 "dma_device_type": 1 00:20:27.215 }, 00:20:27.215 { 00:20:27.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.215 "dma_device_type": 2 00:20:27.215 } 00:20:27.215 ], 00:20:27.215 "driver_specific": { 00:20:27.215 "raid": { 00:20:27.215 "uuid": "e488c543-50a1-4447-a3d5-811e30ef033d", 00:20:27.215 "strip_size_kb": 0, 00:20:27.215 "state": "online", 00:20:27.215 "raid_level": "raid1", 00:20:27.215 "superblock": true, 00:20:27.215 "num_base_bdevs": 4, 00:20:27.215 "num_base_bdevs_discovered": 4, 00:20:27.215 "num_base_bdevs_operational": 4, 00:20:27.215 "base_bdevs_list": [ 00:20:27.215 { 00:20:27.215 "name": "NewBaseBdev", 00:20:27.215 "uuid": "ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a", 00:20:27.215 "is_configured": true, 00:20:27.215 "data_offset": 2048, 00:20:27.215 "data_size": 63488 00:20:27.215 }, 00:20:27.215 { 00:20:27.215 "name": "BaseBdev2", 00:20:27.215 
"uuid": "57596a4c-6753-4cab-b7d7-323f17a3aad2", 00:20:27.215 "is_configured": true, 00:20:27.215 "data_offset": 2048, 00:20:27.215 "data_size": 63488 00:20:27.215 }, 00:20:27.215 { 00:20:27.215 "name": "BaseBdev3", 00:20:27.215 "uuid": "3efb423a-98a4-4851-911d-000d6eb5fc4b", 00:20:27.215 "is_configured": true, 00:20:27.215 "data_offset": 2048, 00:20:27.215 "data_size": 63488 00:20:27.215 }, 00:20:27.215 { 00:20:27.215 "name": "BaseBdev4", 00:20:27.215 "uuid": "17cd21c4-cb9a-4d77-8015-b8256e6d3b44", 00:20:27.215 "is_configured": true, 00:20:27.215 "data_offset": 2048, 00:20:27.215 "data_size": 63488 00:20:27.215 } 00:20:27.215 ] 00:20:27.215 } 00:20:27.215 } 00:20:27.215 }' 00:20:27.215 10:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:27.215 10:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:27.215 BaseBdev2 00:20:27.215 BaseBdev3 00:20:27.215 BaseBdev4' 00:20:27.215 10:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:27.215 10:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:27.215 10:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:27.472 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:27.472 "name": "NewBaseBdev", 00:20:27.472 "aliases": [ 00:20:27.472 "ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a" 00:20:27.472 ], 00:20:27.472 "product_name": "Malloc disk", 00:20:27.472 "block_size": 512, 00:20:27.472 "num_blocks": 65536, 00:20:27.472 "uuid": "ff85d345-a3a7-4c1a-8c67-6cb3e3ed9a1a", 00:20:27.472 "assigned_rate_limits": { 00:20:27.472 "rw_ios_per_sec": 0, 00:20:27.472 "rw_mbytes_per_sec": 0, 
00:20:27.472 "r_mbytes_per_sec": 0, 00:20:27.472 "w_mbytes_per_sec": 0 00:20:27.472 }, 00:20:27.472 "claimed": true, 00:20:27.472 "claim_type": "exclusive_write", 00:20:27.472 "zoned": false, 00:20:27.472 "supported_io_types": { 00:20:27.472 "read": true, 00:20:27.472 "write": true, 00:20:27.472 "unmap": true, 00:20:27.472 "flush": true, 00:20:27.472 "reset": true, 00:20:27.472 "nvme_admin": false, 00:20:27.472 "nvme_io": false, 00:20:27.472 "nvme_io_md": false, 00:20:27.472 "write_zeroes": true, 00:20:27.472 "zcopy": true, 00:20:27.472 "get_zone_info": false, 00:20:27.472 "zone_management": false, 00:20:27.472 "zone_append": false, 00:20:27.472 "compare": false, 00:20:27.472 "compare_and_write": false, 00:20:27.472 "abort": true, 00:20:27.472 "seek_hole": false, 00:20:27.472 "seek_data": false, 00:20:27.472 "copy": true, 00:20:27.472 "nvme_iov_md": false 00:20:27.472 }, 00:20:27.472 "memory_domains": [ 00:20:27.472 { 00:20:27.472 "dma_device_id": "system", 00:20:27.472 "dma_device_type": 1 00:20:27.472 }, 00:20:27.472 { 00:20:27.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.473 "dma_device_type": 2 00:20:27.473 } 00:20:27.473 ], 00:20:27.473 "driver_specific": {} 00:20:27.473 }' 00:20:27.473 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.473 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.473 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:27.473 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.473 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.730 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:27.730 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.730 10:35:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.730 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:27.730 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.730 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.730 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:27.730 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:27.730 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:27.730 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:27.987 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:27.987 "name": "BaseBdev2", 00:20:27.987 "aliases": [ 00:20:27.987 "57596a4c-6753-4cab-b7d7-323f17a3aad2" 00:20:27.987 ], 00:20:27.987 "product_name": "Malloc disk", 00:20:27.987 "block_size": 512, 00:20:27.987 "num_blocks": 65536, 00:20:27.987 "uuid": "57596a4c-6753-4cab-b7d7-323f17a3aad2", 00:20:27.987 "assigned_rate_limits": { 00:20:27.987 "rw_ios_per_sec": 0, 00:20:27.987 "rw_mbytes_per_sec": 0, 00:20:27.987 "r_mbytes_per_sec": 0, 00:20:27.987 "w_mbytes_per_sec": 0 00:20:27.987 }, 00:20:27.987 "claimed": true, 00:20:27.987 "claim_type": "exclusive_write", 00:20:27.987 "zoned": false, 00:20:27.987 "supported_io_types": { 00:20:27.987 "read": true, 00:20:27.987 "write": true, 00:20:27.987 "unmap": true, 00:20:27.987 "flush": true, 00:20:27.987 "reset": true, 00:20:27.987 "nvme_admin": false, 00:20:27.987 "nvme_io": false, 00:20:27.987 "nvme_io_md": false, 00:20:27.987 "write_zeroes": true, 00:20:27.987 "zcopy": true, 00:20:27.987 
"get_zone_info": false, 00:20:27.987 "zone_management": false, 00:20:27.987 "zone_append": false, 00:20:27.987 "compare": false, 00:20:27.987 "compare_and_write": false, 00:20:27.987 "abort": true, 00:20:27.987 "seek_hole": false, 00:20:27.987 "seek_data": false, 00:20:27.987 "copy": true, 00:20:27.987 "nvme_iov_md": false 00:20:27.987 }, 00:20:27.987 "memory_domains": [ 00:20:27.987 { 00:20:27.987 "dma_device_id": "system", 00:20:27.987 "dma_device_type": 1 00:20:27.987 }, 00:20:27.987 { 00:20:27.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.987 "dma_device_type": 2 00:20:27.987 } 00:20:27.987 ], 00:20:27.987 "driver_specific": {} 00:20:27.987 }' 00:20:27.987 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.987 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.987 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:27.987 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.245 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.245 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:28.245 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:28.245 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:28.245 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:28.245 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.245 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.245 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:28.245 10:35:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:28.245 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:28.245 10:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:28.504 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:28.504 "name": "BaseBdev3", 00:20:28.504 "aliases": [ 00:20:28.504 "3efb423a-98a4-4851-911d-000d6eb5fc4b" 00:20:28.504 ], 00:20:28.504 "product_name": "Malloc disk", 00:20:28.504 "block_size": 512, 00:20:28.504 "num_blocks": 65536, 00:20:28.504 "uuid": "3efb423a-98a4-4851-911d-000d6eb5fc4b", 00:20:28.504 "assigned_rate_limits": { 00:20:28.504 "rw_ios_per_sec": 0, 00:20:28.504 "rw_mbytes_per_sec": 0, 00:20:28.504 "r_mbytes_per_sec": 0, 00:20:28.504 "w_mbytes_per_sec": 0 00:20:28.504 }, 00:20:28.504 "claimed": true, 00:20:28.504 "claim_type": "exclusive_write", 00:20:28.504 "zoned": false, 00:20:28.504 "supported_io_types": { 00:20:28.504 "read": true, 00:20:28.504 "write": true, 00:20:28.504 "unmap": true, 00:20:28.504 "flush": true, 00:20:28.504 "reset": true, 00:20:28.504 "nvme_admin": false, 00:20:28.504 "nvme_io": false, 00:20:28.504 "nvme_io_md": false, 00:20:28.504 "write_zeroes": true, 00:20:28.504 "zcopy": true, 00:20:28.504 "get_zone_info": false, 00:20:28.504 "zone_management": false, 00:20:28.504 "zone_append": false, 00:20:28.504 "compare": false, 00:20:28.504 "compare_and_write": false, 00:20:28.504 "abort": true, 00:20:28.504 "seek_hole": false, 00:20:28.504 "seek_data": false, 00:20:28.504 "copy": true, 00:20:28.504 "nvme_iov_md": false 00:20:28.504 }, 00:20:28.504 "memory_domains": [ 00:20:28.504 { 00:20:28.504 "dma_device_id": "system", 00:20:28.504 "dma_device_type": 1 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.504 
"dma_device_type": 2 00:20:28.504 } 00:20:28.504 ], 00:20:28.504 "driver_specific": {} 00:20:28.504 }' 00:20:28.504 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.504 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.762 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:28.762 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.762 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.762 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:28.762 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:28.762 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:28.762 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:28.762 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.762 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.762 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:28.762 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:28.762 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:28.762 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:29.020 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:29.020 "name": "BaseBdev4", 00:20:29.020 "aliases": [ 00:20:29.020 
"17cd21c4-cb9a-4d77-8015-b8256e6d3b44" 00:20:29.020 ], 00:20:29.020 "product_name": "Malloc disk", 00:20:29.020 "block_size": 512, 00:20:29.020 "num_blocks": 65536, 00:20:29.020 "uuid": "17cd21c4-cb9a-4d77-8015-b8256e6d3b44", 00:20:29.020 "assigned_rate_limits": { 00:20:29.020 "rw_ios_per_sec": 0, 00:20:29.020 "rw_mbytes_per_sec": 0, 00:20:29.020 "r_mbytes_per_sec": 0, 00:20:29.020 "w_mbytes_per_sec": 0 00:20:29.020 }, 00:20:29.020 "claimed": true, 00:20:29.020 "claim_type": "exclusive_write", 00:20:29.020 "zoned": false, 00:20:29.020 "supported_io_types": { 00:20:29.020 "read": true, 00:20:29.020 "write": true, 00:20:29.020 "unmap": true, 00:20:29.020 "flush": true, 00:20:29.020 "reset": true, 00:20:29.020 "nvme_admin": false, 00:20:29.020 "nvme_io": false, 00:20:29.020 "nvme_io_md": false, 00:20:29.020 "write_zeroes": true, 00:20:29.020 "zcopy": true, 00:20:29.020 "get_zone_info": false, 00:20:29.020 "zone_management": false, 00:20:29.020 "zone_append": false, 00:20:29.020 "compare": false, 00:20:29.020 "compare_and_write": false, 00:20:29.020 "abort": true, 00:20:29.020 "seek_hole": false, 00:20:29.020 "seek_data": false, 00:20:29.020 "copy": true, 00:20:29.020 "nvme_iov_md": false 00:20:29.020 }, 00:20:29.020 "memory_domains": [ 00:20:29.020 { 00:20:29.020 "dma_device_id": "system", 00:20:29.020 "dma_device_type": 1 00:20:29.020 }, 00:20:29.020 { 00:20:29.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.021 "dma_device_type": 2 00:20:29.021 } 00:20:29.021 ], 00:20:29.021 "driver_specific": {} 00:20:29.021 }' 00:20:29.021 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:29.278 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:29.279 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:29.279 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:29.279 10:35:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:29.279 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:29.279 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:29.279 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:29.279 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:29.279 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:29.279 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:29.279 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:29.279 10:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:29.537 [2024-07-25 10:35:33.221045] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:29.537 [2024-07-25 10:35:33.221072] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:29.537 [2024-07-25 10:35:33.221181] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:29.537 [2024-07-25 10:35:33.221447] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:29.537 [2024-07-25 10:35:33.221461] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x234cc30 name Existed_Raid, state offline 00:20:29.537 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2420164 00:20:29.537 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2420164 ']' 00:20:29.537 10:35:33 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@954 -- # kill -0 2420164 00:20:29.537 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:20:29.537 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:29.795 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2420164 00:20:29.795 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:29.795 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:29.795 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2420164' 00:20:29.795 killing process with pid 2420164 00:20:29.795 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2420164 00:20:29.795 [2024-07-25 10:35:33.269685] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:29.795 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2420164 00:20:29.795 [2024-07-25 10:35:33.318639] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:30.052 10:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:30.052 00:20:30.052 real 0m31.953s 00:20:30.052 user 0m59.903s 00:20:30.052 sys 0m4.443s 00:20:30.052 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.052 10:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.052 ************************************ 00:20:30.052 END TEST raid_state_function_test_sb 00:20:30.052 ************************************ 00:20:30.052 10:35:33 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:20:30.052 10:35:33 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:30.052 10:35:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.052 10:35:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:30.052 ************************************ 00:20:30.052 START TEST raid_superblock_test 00:20:30.052 ************************************ 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:20:30.052 10:35:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2424609 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2424609 /var/tmp/spdk-raid.sock 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2424609 ']' 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:30.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:30.052 10:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.052 [2024-07-25 10:35:33.688706] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:20:30.052 [2024-07-25 10:35:33.688787] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2424609 ] 00:20:30.309 [2024-07-25 10:35:33.772487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.309 [2024-07-25 10:35:33.891786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.309 [2024-07-25 10:35:33.965464] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:30.309 [2024-07-25 10:35:33.965512] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:31.240 10:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:31.240 10:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:20:31.240 10:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:20:31.240 10:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:31.240 10:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:20:31.240 10:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:20:31.240 10:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:31.240 10:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:31.240 10:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:31.240 10:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:31.240 10:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:31.240 malloc1 00:20:31.241 10:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:31.498 [2024-07-25 10:35:35.159018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:31.498 [2024-07-25 10:35:35.159095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.498 [2024-07-25 10:35:35.159145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b82b0 00:20:31.498 [2024-07-25 10:35:35.159162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:31.498 [2024-07-25 10:35:35.160848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.498 [2024-07-25 10:35:35.160871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:31.498 pt1 00:20:31.498 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:31.498 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:31.498 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:20:31.498 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:20:31.498 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:31.498 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:31.498 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:31.498 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:31.498 10:35:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:32.064 malloc2 00:20:32.064 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:32.064 [2024-07-25 10:35:35.711821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:32.064 [2024-07-25 10:35:35.711879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.064 [2024-07-25 10:35:35.711900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x166b1e0 00:20:32.064 [2024-07-25 10:35:35.711915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.064 [2024-07-25 10:35:35.713303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.064 [2024-07-25 10:35:35.713331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:32.064 pt2 00:20:32.064 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:32.064 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:32.064 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:20:32.064 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:20:32.064 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:32.064 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:32.064 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:32.064 10:35:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:32.064 10:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:32.322 malloc3 00:20:32.580 10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:32.838 [2024-07-25 10:35:36.316932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:32.838 [2024-07-25 10:35:36.316984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.838 [2024-07-25 10:35:36.317005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16514d0 00:20:32.838 [2024-07-25 10:35:36.317021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.838 [2024-07-25 10:35:36.318415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.838 [2024-07-25 10:35:36.318443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:32.838 pt3 00:20:32.838 10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:32.838 10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:32.838 10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:20:32.838 10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:20:32.838 10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:32.838 10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:32.838 
10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:32.838 10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:32.838 10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:33.096 malloc4 00:20:33.096 10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:33.355 [2024-07-25 10:35:36.837110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:33.355 [2024-07-25 10:35:36.837172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.355 [2024-07-25 10:35:36.837195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14afe30 00:20:33.355 [2024-07-25 10:35:36.837211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.355 [2024-07-25 10:35:36.838815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.355 [2024-07-25 10:35:36.838843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:33.355 pt4 00:20:33.355 10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:33.355 10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:33.355 10:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:33.612 [2024-07-25 10:35:37.121913] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:20:33.612 [2024-07-25 10:35:37.123393] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:33.612 [2024-07-25 10:35:37.123461] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:33.612 [2024-07-25 10:35:37.123519] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:33.612 [2024-07-25 10:35:37.123762] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x14b0980 00:20:33.612 [2024-07-25 10:35:37.123779] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:33.612 [2024-07-25 10:35:37.124021] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14cf1a0 00:20:33.612 [2024-07-25 10:35:37.124239] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14b0980 00:20:33.612 [2024-07-25 10:35:37.124252] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14b0980 00:20:33.612 [2024-07-25 10:35:37.124382] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.612 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:33.612 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:33.612 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:33.612 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:33.612 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:33.612 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:33.612 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.612 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:20:33.613 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.613 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.613 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.613 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.871 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.871 "name": "raid_bdev1", 00:20:33.871 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:33.871 "strip_size_kb": 0, 00:20:33.871 "state": "online", 00:20:33.871 "raid_level": "raid1", 00:20:33.871 "superblock": true, 00:20:33.871 "num_base_bdevs": 4, 00:20:33.871 "num_base_bdevs_discovered": 4, 00:20:33.871 "num_base_bdevs_operational": 4, 00:20:33.871 "base_bdevs_list": [ 00:20:33.871 { 00:20:33.871 "name": "pt1", 00:20:33.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:33.871 "is_configured": true, 00:20:33.871 "data_offset": 2048, 00:20:33.871 "data_size": 63488 00:20:33.871 }, 00:20:33.871 { 00:20:33.871 "name": "pt2", 00:20:33.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:33.871 "is_configured": true, 00:20:33.871 "data_offset": 2048, 00:20:33.871 "data_size": 63488 00:20:33.871 }, 00:20:33.871 { 00:20:33.871 "name": "pt3", 00:20:33.871 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:33.871 "is_configured": true, 00:20:33.871 "data_offset": 2048, 00:20:33.871 "data_size": 63488 00:20:33.871 }, 00:20:33.871 { 00:20:33.871 "name": "pt4", 00:20:33.871 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:33.871 "is_configured": true, 00:20:33.871 "data_offset": 2048, 00:20:33.871 "data_size": 63488 00:20:33.871 } 00:20:33.871 ] 00:20:33.871 }' 00:20:33.871 10:35:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.871 10:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.436 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:20:34.436 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:34.436 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:34.436 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:34.436 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:34.436 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:34.436 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:34.436 10:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:34.695 [2024-07-25 10:35:38.189040] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.695 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:34.695 "name": "raid_bdev1", 00:20:34.695 "aliases": [ 00:20:34.695 "6d75d983-d840-4692-a660-ae5e68bd6160" 00:20:34.695 ], 00:20:34.695 "product_name": "Raid Volume", 00:20:34.695 "block_size": 512, 00:20:34.695 "num_blocks": 63488, 00:20:34.695 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:34.695 "assigned_rate_limits": { 00:20:34.695 "rw_ios_per_sec": 0, 00:20:34.695 "rw_mbytes_per_sec": 0, 00:20:34.695 "r_mbytes_per_sec": 0, 00:20:34.695 "w_mbytes_per_sec": 0 00:20:34.695 }, 00:20:34.695 "claimed": false, 00:20:34.695 "zoned": false, 00:20:34.695 "supported_io_types": { 00:20:34.695 "read": true, 00:20:34.695 "write": true, 00:20:34.695 
"unmap": false, 00:20:34.695 "flush": false, 00:20:34.695 "reset": true, 00:20:34.695 "nvme_admin": false, 00:20:34.695 "nvme_io": false, 00:20:34.695 "nvme_io_md": false, 00:20:34.695 "write_zeroes": true, 00:20:34.695 "zcopy": false, 00:20:34.695 "get_zone_info": false, 00:20:34.695 "zone_management": false, 00:20:34.695 "zone_append": false, 00:20:34.695 "compare": false, 00:20:34.695 "compare_and_write": false, 00:20:34.695 "abort": false, 00:20:34.695 "seek_hole": false, 00:20:34.695 "seek_data": false, 00:20:34.695 "copy": false, 00:20:34.695 "nvme_iov_md": false 00:20:34.695 }, 00:20:34.695 "memory_domains": [ 00:20:34.695 { 00:20:34.695 "dma_device_id": "system", 00:20:34.695 "dma_device_type": 1 00:20:34.695 }, 00:20:34.695 { 00:20:34.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.695 "dma_device_type": 2 00:20:34.695 }, 00:20:34.695 { 00:20:34.695 "dma_device_id": "system", 00:20:34.695 "dma_device_type": 1 00:20:34.695 }, 00:20:34.695 { 00:20:34.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.695 "dma_device_type": 2 00:20:34.695 }, 00:20:34.695 { 00:20:34.695 "dma_device_id": "system", 00:20:34.695 "dma_device_type": 1 00:20:34.695 }, 00:20:34.695 { 00:20:34.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.695 "dma_device_type": 2 00:20:34.695 }, 00:20:34.695 { 00:20:34.695 "dma_device_id": "system", 00:20:34.695 "dma_device_type": 1 00:20:34.695 }, 00:20:34.695 { 00:20:34.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.695 "dma_device_type": 2 00:20:34.695 } 00:20:34.695 ], 00:20:34.695 "driver_specific": { 00:20:34.695 "raid": { 00:20:34.695 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:34.695 "strip_size_kb": 0, 00:20:34.695 "state": "online", 00:20:34.695 "raid_level": "raid1", 00:20:34.695 "superblock": true, 00:20:34.695 "num_base_bdevs": 4, 00:20:34.695 "num_base_bdevs_discovered": 4, 00:20:34.695 "num_base_bdevs_operational": 4, 00:20:34.695 "base_bdevs_list": [ 00:20:34.695 { 00:20:34.695 "name": "pt1", 
00:20:34.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:34.695 "is_configured": true, 00:20:34.695 "data_offset": 2048, 00:20:34.695 "data_size": 63488 00:20:34.695 }, 00:20:34.695 { 00:20:34.695 "name": "pt2", 00:20:34.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:34.695 "is_configured": true, 00:20:34.695 "data_offset": 2048, 00:20:34.695 "data_size": 63488 00:20:34.695 }, 00:20:34.695 { 00:20:34.695 "name": "pt3", 00:20:34.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:34.695 "is_configured": true, 00:20:34.695 "data_offset": 2048, 00:20:34.695 "data_size": 63488 00:20:34.695 }, 00:20:34.695 { 00:20:34.695 "name": "pt4", 00:20:34.695 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:34.695 "is_configured": true, 00:20:34.695 "data_offset": 2048, 00:20:34.695 "data_size": 63488 00:20:34.695 } 00:20:34.695 ] 00:20:34.695 } 00:20:34.695 } 00:20:34.695 }' 00:20:34.695 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:34.695 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:34.695 pt2 00:20:34.695 pt3 00:20:34.695 pt4' 00:20:34.695 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:34.695 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:34.695 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:34.953 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:34.953 "name": "pt1", 00:20:34.953 "aliases": [ 00:20:34.953 "00000000-0000-0000-0000-000000000001" 00:20:34.953 ], 00:20:34.953 "product_name": "passthru", 00:20:34.953 "block_size": 512, 00:20:34.953 "num_blocks": 65536, 00:20:34.953 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:20:34.953 "assigned_rate_limits": { 00:20:34.953 "rw_ios_per_sec": 0, 00:20:34.953 "rw_mbytes_per_sec": 0, 00:20:34.953 "r_mbytes_per_sec": 0, 00:20:34.953 "w_mbytes_per_sec": 0 00:20:34.953 }, 00:20:34.953 "claimed": true, 00:20:34.953 "claim_type": "exclusive_write", 00:20:34.953 "zoned": false, 00:20:34.953 "supported_io_types": { 00:20:34.953 "read": true, 00:20:34.953 "write": true, 00:20:34.953 "unmap": true, 00:20:34.953 "flush": true, 00:20:34.953 "reset": true, 00:20:34.953 "nvme_admin": false, 00:20:34.953 "nvme_io": false, 00:20:34.953 "nvme_io_md": false, 00:20:34.953 "write_zeroes": true, 00:20:34.953 "zcopy": true, 00:20:34.953 "get_zone_info": false, 00:20:34.954 "zone_management": false, 00:20:34.954 "zone_append": false, 00:20:34.954 "compare": false, 00:20:34.954 "compare_and_write": false, 00:20:34.954 "abort": true, 00:20:34.954 "seek_hole": false, 00:20:34.954 "seek_data": false, 00:20:34.954 "copy": true, 00:20:34.954 "nvme_iov_md": false 00:20:34.954 }, 00:20:34.954 "memory_domains": [ 00:20:34.954 { 00:20:34.954 "dma_device_id": "system", 00:20:34.954 "dma_device_type": 1 00:20:34.954 }, 00:20:34.954 { 00:20:34.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.954 "dma_device_type": 2 00:20:34.954 } 00:20:34.954 ], 00:20:34.954 "driver_specific": { 00:20:34.954 "passthru": { 00:20:34.954 "name": "pt1", 00:20:34.954 "base_bdev_name": "malloc1" 00:20:34.954 } 00:20:34.954 } 00:20:34.954 }' 00:20:34.954 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:34.954 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:34.954 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:34.954 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:34.954 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:34.954 10:35:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:34.954 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.212 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.212 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:35.212 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.212 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.212 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:35.212 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:35.212 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:35.212 10:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:35.470 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:35.470 "name": "pt2", 00:20:35.470 "aliases": [ 00:20:35.470 "00000000-0000-0000-0000-000000000002" 00:20:35.470 ], 00:20:35.470 "product_name": "passthru", 00:20:35.470 "block_size": 512, 00:20:35.470 "num_blocks": 65536, 00:20:35.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:35.470 "assigned_rate_limits": { 00:20:35.470 "rw_ios_per_sec": 0, 00:20:35.470 "rw_mbytes_per_sec": 0, 00:20:35.470 "r_mbytes_per_sec": 0, 00:20:35.470 "w_mbytes_per_sec": 0 00:20:35.470 }, 00:20:35.470 "claimed": true, 00:20:35.470 "claim_type": "exclusive_write", 00:20:35.470 "zoned": false, 00:20:35.470 "supported_io_types": { 00:20:35.470 "read": true, 00:20:35.470 "write": true, 00:20:35.470 "unmap": true, 00:20:35.470 "flush": true, 00:20:35.470 "reset": true, 00:20:35.470 "nvme_admin": false, 00:20:35.470 
"nvme_io": false, 00:20:35.470 "nvme_io_md": false, 00:20:35.470 "write_zeroes": true, 00:20:35.470 "zcopy": true, 00:20:35.470 "get_zone_info": false, 00:20:35.470 "zone_management": false, 00:20:35.470 "zone_append": false, 00:20:35.470 "compare": false, 00:20:35.470 "compare_and_write": false, 00:20:35.470 "abort": true, 00:20:35.470 "seek_hole": false, 00:20:35.470 "seek_data": false, 00:20:35.470 "copy": true, 00:20:35.470 "nvme_iov_md": false 00:20:35.470 }, 00:20:35.470 "memory_domains": [ 00:20:35.470 { 00:20:35.470 "dma_device_id": "system", 00:20:35.470 "dma_device_type": 1 00:20:35.470 }, 00:20:35.471 { 00:20:35.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.471 "dma_device_type": 2 00:20:35.471 } 00:20:35.471 ], 00:20:35.471 "driver_specific": { 00:20:35.471 "passthru": { 00:20:35.471 "name": "pt2", 00:20:35.471 "base_bdev_name": "malloc2" 00:20:35.471 } 00:20:35.471 } 00:20:35.471 }' 00:20:35.471 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.471 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.471 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:35.471 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.471 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.729 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:35.729 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.729 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.729 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:35.729 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.729 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:20:35.729 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:35.729 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:35.729 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:35.729 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:35.987 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:35.987 "name": "pt3", 00:20:35.987 "aliases": [ 00:20:35.987 "00000000-0000-0000-0000-000000000003" 00:20:35.987 ], 00:20:35.987 "product_name": "passthru", 00:20:35.987 "block_size": 512, 00:20:35.987 "num_blocks": 65536, 00:20:35.987 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:35.987 "assigned_rate_limits": { 00:20:35.987 "rw_ios_per_sec": 0, 00:20:35.987 "rw_mbytes_per_sec": 0, 00:20:35.987 "r_mbytes_per_sec": 0, 00:20:35.987 "w_mbytes_per_sec": 0 00:20:35.987 }, 00:20:35.987 "claimed": true, 00:20:35.987 "claim_type": "exclusive_write", 00:20:35.987 "zoned": false, 00:20:35.987 "supported_io_types": { 00:20:35.987 "read": true, 00:20:35.987 "write": true, 00:20:35.987 "unmap": true, 00:20:35.987 "flush": true, 00:20:35.987 "reset": true, 00:20:35.987 "nvme_admin": false, 00:20:35.987 "nvme_io": false, 00:20:35.987 "nvme_io_md": false, 00:20:35.987 "write_zeroes": true, 00:20:35.987 "zcopy": true, 00:20:35.987 "get_zone_info": false, 00:20:35.987 "zone_management": false, 00:20:35.987 "zone_append": false, 00:20:35.987 "compare": false, 00:20:35.987 "compare_and_write": false, 00:20:35.987 "abort": true, 00:20:35.987 "seek_hole": false, 00:20:35.987 "seek_data": false, 00:20:35.987 "copy": true, 00:20:35.987 "nvme_iov_md": false 00:20:35.987 }, 00:20:35.987 "memory_domains": [ 00:20:35.987 { 00:20:35.987 "dma_device_id": "system", 00:20:35.987 
"dma_device_type": 1 00:20:35.987 }, 00:20:35.987 { 00:20:35.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.987 "dma_device_type": 2 00:20:35.987 } 00:20:35.987 ], 00:20:35.987 "driver_specific": { 00:20:35.987 "passthru": { 00:20:35.987 "name": "pt3", 00:20:35.987 "base_bdev_name": "malloc3" 00:20:35.987 } 00:20:35.987 } 00:20:35.987 }' 00:20:35.987 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.987 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.987 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:35.987 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.987 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.245 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:36.245 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.245 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.245 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:36.245 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.245 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.245 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.245 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:36.245 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:36.245 10:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:36.504 10:35:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:36.504 "name": "pt4", 00:20:36.504 "aliases": [ 00:20:36.504 "00000000-0000-0000-0000-000000000004" 00:20:36.504 ], 00:20:36.504 "product_name": "passthru", 00:20:36.504 "block_size": 512, 00:20:36.504 "num_blocks": 65536, 00:20:36.504 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:36.504 "assigned_rate_limits": { 00:20:36.504 "rw_ios_per_sec": 0, 00:20:36.504 "rw_mbytes_per_sec": 0, 00:20:36.504 "r_mbytes_per_sec": 0, 00:20:36.504 "w_mbytes_per_sec": 0 00:20:36.504 }, 00:20:36.504 "claimed": true, 00:20:36.504 "claim_type": "exclusive_write", 00:20:36.504 "zoned": false, 00:20:36.504 "supported_io_types": { 00:20:36.504 "read": true, 00:20:36.504 "write": true, 00:20:36.504 "unmap": true, 00:20:36.504 "flush": true, 00:20:36.504 "reset": true, 00:20:36.504 "nvme_admin": false, 00:20:36.504 "nvme_io": false, 00:20:36.504 "nvme_io_md": false, 00:20:36.504 "write_zeroes": true, 00:20:36.504 "zcopy": true, 00:20:36.504 "get_zone_info": false, 00:20:36.504 "zone_management": false, 00:20:36.504 "zone_append": false, 00:20:36.504 "compare": false, 00:20:36.504 "compare_and_write": false, 00:20:36.504 "abort": true, 00:20:36.504 "seek_hole": false, 00:20:36.504 "seek_data": false, 00:20:36.504 "copy": true, 00:20:36.504 "nvme_iov_md": false 00:20:36.504 }, 00:20:36.504 "memory_domains": [ 00:20:36.504 { 00:20:36.504 "dma_device_id": "system", 00:20:36.504 "dma_device_type": 1 00:20:36.504 }, 00:20:36.504 { 00:20:36.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.504 "dma_device_type": 2 00:20:36.504 } 00:20:36.504 ], 00:20:36.504 "driver_specific": { 00:20:36.504 "passthru": { 00:20:36.504 "name": "pt4", 00:20:36.504 "base_bdev_name": "malloc4" 00:20:36.504 } 00:20:36.504 } 00:20:36.504 }' 00:20:36.504 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.504 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.504 10:35:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:36.504 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.762 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.762 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:36.762 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.762 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.762 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:36.762 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.762 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.762 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.762 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:36.762 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:20:37.020 [2024-07-25 10:35:40.635565] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:37.020 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6d75d983-d840-4692-a660-ae5e68bd6160 00:20:37.020 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 6d75d983-d840-4692-a660-ae5e68bd6160 ']' 00:20:37.020 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:37.278 [2024-07-25 10:35:40.932033] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.278 
[2024-07-25 10:35:40.932071] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.278 [2024-07-25 10:35:40.932168] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.278 [2024-07-25 10:35:40.932268] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.278 [2024-07-25 10:35:40.932285] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14b0980 name raid_bdev1, state offline 00:20:37.278 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.278 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:20:37.536 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:20:37.536 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:20:37.536 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:37.536 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:37.794 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:37.794 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:38.052 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:38.052 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:38.310 10:35:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:38.310 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:38.568 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:38.568 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:38.827 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:20:38.827 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:38.827 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:20:38.827 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:38.827 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:38.827 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.827 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:38.827 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.827 10:35:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:38.827 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.827 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:38.827 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:38.827 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:39.087 [2024-07-25 10:35:42.692657] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:39.087 [2024-07-25 10:35:42.694208] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:39.087 [2024-07-25 10:35:42.694252] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:39.087 [2024-07-25 10:35:42.694292] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:39.087 [2024-07-25 10:35:42.694350] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:39.087 [2024-07-25 10:35:42.694399] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:39.087 [2024-07-25 10:35:42.694436] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:39.087 [2024-07-25 10:35:42.694461] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:39.087 [2024-07-25 
10:35:42.694481] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:39.087 [2024-07-25 10:35:42.694492] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16521b0 name raid_bdev1, state configuring 00:20:39.087 request: 00:20:39.087 { 00:20:39.087 "name": "raid_bdev1", 00:20:39.087 "raid_level": "raid1", 00:20:39.087 "base_bdevs": [ 00:20:39.087 "malloc1", 00:20:39.087 "malloc2", 00:20:39.087 "malloc3", 00:20:39.087 "malloc4" 00:20:39.087 ], 00:20:39.087 "superblock": false, 00:20:39.087 "method": "bdev_raid_create", 00:20:39.087 "req_id": 1 00:20:39.087 } 00:20:39.087 Got JSON-RPC error response 00:20:39.087 response: 00:20:39.087 { 00:20:39.087 "code": -17, 00:20:39.087 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:39.087 } 00:20:39.087 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:20:39.087 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:39.087 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:39.087 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:39.087 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.087 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:39.345 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:39.345 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:20:39.345 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:39.604 [2024-07-25 10:35:43.213948] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:39.604 [2024-07-25 10:35:43.214004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.604 [2024-07-25 10:35:43.214025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1651700 00:20:39.604 [2024-07-25 10:35:43.214038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.604 [2024-07-25 10:35:43.215918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.604 [2024-07-25 10:35:43.215942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:39.604 [2024-07-25 10:35:43.216036] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:39.604 [2024-07-25 10:35:43.216074] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:39.604 pt1 00:20:39.604 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:39.604 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:39.604 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:39.604 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:39.604 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:39.604 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:39.604 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.604 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.604 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.604 10:35:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.604 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.604 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.861 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:39.862 "name": "raid_bdev1", 00:20:39.862 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:39.862 "strip_size_kb": 0, 00:20:39.862 "state": "configuring", 00:20:39.862 "raid_level": "raid1", 00:20:39.862 "superblock": true, 00:20:39.862 "num_base_bdevs": 4, 00:20:39.862 "num_base_bdevs_discovered": 1, 00:20:39.862 "num_base_bdevs_operational": 4, 00:20:39.862 "base_bdevs_list": [ 00:20:39.862 { 00:20:39.862 "name": "pt1", 00:20:39.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:39.862 "is_configured": true, 00:20:39.862 "data_offset": 2048, 00:20:39.862 "data_size": 63488 00:20:39.862 }, 00:20:39.862 { 00:20:39.862 "name": null, 00:20:39.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:39.862 "is_configured": false, 00:20:39.862 "data_offset": 2048, 00:20:39.862 "data_size": 63488 00:20:39.862 }, 00:20:39.862 { 00:20:39.862 "name": null, 00:20:39.862 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:39.862 "is_configured": false, 00:20:39.862 "data_offset": 2048, 00:20:39.862 "data_size": 63488 00:20:39.862 }, 00:20:39.862 { 00:20:39.862 "name": null, 00:20:39.862 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:39.862 "is_configured": false, 00:20:39.862 "data_offset": 2048, 00:20:39.862 "data_size": 63488 00:20:39.862 } 00:20:39.862 ] 00:20:39.862 }' 00:20:39.862 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:39.862 10:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:20:40.427 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:20:40.427 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:40.685 [2024-07-25 10:35:44.224764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:40.685 [2024-07-25 10:35:44.224832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.685 [2024-07-25 10:35:44.224852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x165b4b0 00:20:40.685 [2024-07-25 10:35:44.224864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.685 [2024-07-25 10:35:44.225253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.685 [2024-07-25 10:35:44.225274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:40.685 [2024-07-25 10:35:44.225344] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:40.685 [2024-07-25 10:35:44.225381] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:40.685 pt2 00:20:40.685 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:40.943 [2024-07-25 10:35:44.497527] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:40.943 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:40.943 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:40.943 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:20:40.943 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:40.943 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:40.943 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:40.943 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.943 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.943 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.943 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.943 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.943 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.201 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.201 "name": "raid_bdev1", 00:20:41.201 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:41.201 "strip_size_kb": 0, 00:20:41.201 "state": "configuring", 00:20:41.201 "raid_level": "raid1", 00:20:41.201 "superblock": true, 00:20:41.201 "num_base_bdevs": 4, 00:20:41.201 "num_base_bdevs_discovered": 1, 00:20:41.201 "num_base_bdevs_operational": 4, 00:20:41.201 "base_bdevs_list": [ 00:20:41.201 { 00:20:41.201 "name": "pt1", 00:20:41.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:41.202 "is_configured": true, 00:20:41.202 "data_offset": 2048, 00:20:41.202 "data_size": 63488 00:20:41.202 }, 00:20:41.202 { 00:20:41.202 "name": null, 00:20:41.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:41.202 "is_configured": false, 00:20:41.202 "data_offset": 2048, 00:20:41.202 
"data_size": 63488 00:20:41.202 }, 00:20:41.202 { 00:20:41.202 "name": null, 00:20:41.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:41.202 "is_configured": false, 00:20:41.202 "data_offset": 2048, 00:20:41.202 "data_size": 63488 00:20:41.202 }, 00:20:41.202 { 00:20:41.202 "name": null, 00:20:41.202 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:41.202 "is_configured": false, 00:20:41.202 "data_offset": 2048, 00:20:41.202 "data_size": 63488 00:20:41.202 } 00:20:41.202 ] 00:20:41.202 }' 00:20:41.202 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.202 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.767 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:41.767 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:41.767 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:42.026 [2024-07-25 10:35:45.552296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:42.026 [2024-07-25 10:35:45.552356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.026 [2024-07-25 10:35:45.552377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b03e0 00:20:42.026 [2024-07-25 10:35:45.552404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.026 [2024-07-25 10:35:45.552810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.026 [2024-07-25 10:35:45.552833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:42.026 [2024-07-25 10:35:45.552912] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
pt2 00:20:42.026 [2024-07-25 10:35:45.552939] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:42.026 pt2 00:20:42.026 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:42.026 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:42.026 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:42.284 [2024-07-25 10:35:45.796911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:42.284 [2024-07-25 10:35:45.796945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.284 [2024-07-25 10:35:45.796981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b15c0 00:20:42.284 [2024-07-25 10:35:45.796993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.284 [2024-07-25 10:35:45.797251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.284 [2024-07-25 10:35:45.797274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:42.284 [2024-07-25 10:35:45.797324] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:42.284 [2024-07-25 10:35:45.797345] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:42.284 pt3 00:20:42.284 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:42.284 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:42.284 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:20:42.543 [2024-07-25 10:35:46.041546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:42.543 [2024-07-25 10:35:46.041578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.543 [2024-07-25 10:35:46.041594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b99b0 00:20:42.543 [2024-07-25 10:35:46.041605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.543 [2024-07-25 10:35:46.041836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.543 [2024-07-25 10:35:46.041857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:42.543 [2024-07-25 10:35:46.041903] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:42.543 [2024-07-25 10:35:46.041924] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:42.543 [2024-07-25 10:35:46.042037] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x14b1860 00:20:42.543 [2024-07-25 10:35:46.042051] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:42.543 [2024-07-25 10:35:46.042198] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14b7f30 00:20:42.543 [2024-07-25 10:35:46.042329] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14b1860 00:20:42.543 [2024-07-25 10:35:46.042342] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14b1860 00:20:42.543 [2024-07-25 10:35:46.042439] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.543 pt4 00:20:42.543 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:42.543 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:42.543 10:35:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:42.543 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:42.543 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:42.543 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:42.543 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:42.543 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:42.543 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.543 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.543 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.543 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.543 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.543 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.802 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.802 "name": "raid_bdev1", 00:20:42.802 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:42.802 "strip_size_kb": 0, 00:20:42.802 "state": "online", 00:20:42.802 "raid_level": "raid1", 00:20:42.802 "superblock": true, 00:20:42.802 "num_base_bdevs": 4, 00:20:42.802 "num_base_bdevs_discovered": 4, 00:20:42.802 "num_base_bdevs_operational": 4, 00:20:42.802 "base_bdevs_list": [ 00:20:42.802 { 00:20:42.802 "name": "pt1", 00:20:42.802 "uuid": "00000000-0000-0000-0000-000000000001", 
00:20:42.802 "is_configured": true, 00:20:42.802 "data_offset": 2048, 00:20:42.802 "data_size": 63488 00:20:42.802 }, 00:20:42.802 { 00:20:42.802 "name": "pt2", 00:20:42.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.802 "is_configured": true, 00:20:42.802 "data_offset": 2048, 00:20:42.802 "data_size": 63488 00:20:42.802 }, 00:20:42.802 { 00:20:42.803 "name": "pt3", 00:20:42.803 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:42.803 "is_configured": true, 00:20:42.803 "data_offset": 2048, 00:20:42.803 "data_size": 63488 00:20:42.803 }, 00:20:42.803 { 00:20:42.803 "name": "pt4", 00:20:42.803 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:42.803 "is_configured": true, 00:20:42.803 "data_offset": 2048, 00:20:42.803 "data_size": 63488 00:20:42.803 } 00:20:42.803 ] 00:20:42.803 }' 00:20:42.803 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.803 10:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.369 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:43.369 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:43.369 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:43.369 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:43.369 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:43.369 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:43.369 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:43.369 10:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:43.370 [2024-07-25 10:35:47.044476] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:43.370 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:43.370 "name": "raid_bdev1", 00:20:43.370 "aliases": [ 00:20:43.370 "6d75d983-d840-4692-a660-ae5e68bd6160" 00:20:43.370 ], 00:20:43.370 "product_name": "Raid Volume", 00:20:43.370 "block_size": 512, 00:20:43.370 "num_blocks": 63488, 00:20:43.370 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:43.370 "assigned_rate_limits": { 00:20:43.370 "rw_ios_per_sec": 0, 00:20:43.370 "rw_mbytes_per_sec": 0, 00:20:43.370 "r_mbytes_per_sec": 0, 00:20:43.370 "w_mbytes_per_sec": 0 00:20:43.370 }, 00:20:43.370 "claimed": false, 00:20:43.370 "zoned": false, 00:20:43.370 "supported_io_types": { 00:20:43.370 "read": true, 00:20:43.370 "write": true, 00:20:43.370 "unmap": false, 00:20:43.370 "flush": false, 00:20:43.370 "reset": true, 00:20:43.370 "nvme_admin": false, 00:20:43.370 "nvme_io": false, 00:20:43.370 "nvme_io_md": false, 00:20:43.370 "write_zeroes": true, 00:20:43.370 "zcopy": false, 00:20:43.370 "get_zone_info": false, 00:20:43.370 "zone_management": false, 00:20:43.370 "zone_append": false, 00:20:43.370 "compare": false, 00:20:43.370 "compare_and_write": false, 00:20:43.370 "abort": false, 00:20:43.370 "seek_hole": false, 00:20:43.370 "seek_data": false, 00:20:43.370 "copy": false, 00:20:43.370 "nvme_iov_md": false 00:20:43.370 }, 00:20:43.370 "memory_domains": [ 00:20:43.370 { 00:20:43.370 "dma_device_id": "system", 00:20:43.370 "dma_device_type": 1 00:20:43.370 }, 00:20:43.370 { 00:20:43.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.370 "dma_device_type": 2 00:20:43.370 }, 00:20:43.370 { 00:20:43.370 "dma_device_id": "system", 00:20:43.370 "dma_device_type": 1 00:20:43.370 }, 00:20:43.370 { 00:20:43.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.370 "dma_device_type": 2 00:20:43.370 }, 00:20:43.370 { 00:20:43.370 "dma_device_id": "system", 00:20:43.370 
"dma_device_type": 1 00:20:43.370 }, 00:20:43.370 { 00:20:43.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.370 "dma_device_type": 2 00:20:43.370 }, 00:20:43.370 { 00:20:43.370 "dma_device_id": "system", 00:20:43.370 "dma_device_type": 1 00:20:43.370 }, 00:20:43.370 { 00:20:43.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.370 "dma_device_type": 2 00:20:43.370 } 00:20:43.370 ], 00:20:43.370 "driver_specific": { 00:20:43.370 "raid": { 00:20:43.370 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:43.370 "strip_size_kb": 0, 00:20:43.370 "state": "online", 00:20:43.370 "raid_level": "raid1", 00:20:43.370 "superblock": true, 00:20:43.370 "num_base_bdevs": 4, 00:20:43.370 "num_base_bdevs_discovered": 4, 00:20:43.370 "num_base_bdevs_operational": 4, 00:20:43.370 "base_bdevs_list": [ 00:20:43.370 { 00:20:43.370 "name": "pt1", 00:20:43.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:43.370 "is_configured": true, 00:20:43.370 "data_offset": 2048, 00:20:43.370 "data_size": 63488 00:20:43.370 }, 00:20:43.370 { 00:20:43.370 "name": "pt2", 00:20:43.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:43.370 "is_configured": true, 00:20:43.370 "data_offset": 2048, 00:20:43.370 "data_size": 63488 00:20:43.370 }, 00:20:43.370 { 00:20:43.370 "name": "pt3", 00:20:43.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:43.370 "is_configured": true, 00:20:43.370 "data_offset": 2048, 00:20:43.370 "data_size": 63488 00:20:43.370 }, 00:20:43.370 { 00:20:43.370 "name": "pt4", 00:20:43.370 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:43.370 "is_configured": true, 00:20:43.370 "data_offset": 2048, 00:20:43.370 "data_size": 63488 00:20:43.370 } 00:20:43.370 ] 00:20:43.370 } 00:20:43.370 } 00:20:43.370 }' 00:20:43.370 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:43.629 10:35:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:43.629 pt2 00:20:43.629 pt3 00:20:43.629 pt4' 00:20:43.629 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:43.629 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:43.629 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:43.629 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:43.629 "name": "pt1", 00:20:43.629 "aliases": [ 00:20:43.629 "00000000-0000-0000-0000-000000000001" 00:20:43.629 ], 00:20:43.629 "product_name": "passthru", 00:20:43.629 "block_size": 512, 00:20:43.629 "num_blocks": 65536, 00:20:43.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:43.629 "assigned_rate_limits": { 00:20:43.629 "rw_ios_per_sec": 0, 00:20:43.629 "rw_mbytes_per_sec": 0, 00:20:43.629 "r_mbytes_per_sec": 0, 00:20:43.629 "w_mbytes_per_sec": 0 00:20:43.629 }, 00:20:43.629 "claimed": true, 00:20:43.629 "claim_type": "exclusive_write", 00:20:43.629 "zoned": false, 00:20:43.629 "supported_io_types": { 00:20:43.629 "read": true, 00:20:43.629 "write": true, 00:20:43.629 "unmap": true, 00:20:43.629 "flush": true, 00:20:43.629 "reset": true, 00:20:43.629 "nvme_admin": false, 00:20:43.629 "nvme_io": false, 00:20:43.629 "nvme_io_md": false, 00:20:43.629 "write_zeroes": true, 00:20:43.629 "zcopy": true, 00:20:43.629 "get_zone_info": false, 00:20:43.629 "zone_management": false, 00:20:43.629 "zone_append": false, 00:20:43.629 "compare": false, 00:20:43.629 "compare_and_write": false, 00:20:43.629 "abort": true, 00:20:43.629 "seek_hole": false, 00:20:43.629 "seek_data": false, 00:20:43.629 "copy": true, 00:20:43.629 "nvme_iov_md": false 00:20:43.629 }, 00:20:43.629 "memory_domains": [ 00:20:43.629 { 00:20:43.629 "dma_device_id": "system", 00:20:43.629 
"dma_device_type": 1 00:20:43.629 }, 00:20:43.629 { 00:20:43.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.629 "dma_device_type": 2 00:20:43.629 } 00:20:43.629 ], 00:20:43.629 "driver_specific": { 00:20:43.629 "passthru": { 00:20:43.629 "name": "pt1", 00:20:43.629 "base_bdev_name": "malloc1" 00:20:43.629 } 00:20:43.629 } 00:20:43.629 }' 00:20:43.629 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:43.887 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:43.887 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:43.888 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:43.888 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:43.888 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:43.888 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:43.888 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:43.888 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:43.888 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:44.145 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:44.145 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:44.145 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:44.145 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:44.145 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:44.423 10:35:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:44.423 "name": "pt2", 00:20:44.423 "aliases": [ 00:20:44.423 "00000000-0000-0000-0000-000000000002" 00:20:44.423 ], 00:20:44.423 "product_name": "passthru", 00:20:44.423 "block_size": 512, 00:20:44.423 "num_blocks": 65536, 00:20:44.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:44.423 "assigned_rate_limits": { 00:20:44.423 "rw_ios_per_sec": 0, 00:20:44.423 "rw_mbytes_per_sec": 0, 00:20:44.423 "r_mbytes_per_sec": 0, 00:20:44.423 "w_mbytes_per_sec": 0 00:20:44.423 }, 00:20:44.423 "claimed": true, 00:20:44.423 "claim_type": "exclusive_write", 00:20:44.423 "zoned": false, 00:20:44.423 "supported_io_types": { 00:20:44.423 "read": true, 00:20:44.423 "write": true, 00:20:44.423 "unmap": true, 00:20:44.423 "flush": true, 00:20:44.423 "reset": true, 00:20:44.423 "nvme_admin": false, 00:20:44.423 "nvme_io": false, 00:20:44.423 "nvme_io_md": false, 00:20:44.423 "write_zeroes": true, 00:20:44.423 "zcopy": true, 00:20:44.423 "get_zone_info": false, 00:20:44.423 "zone_management": false, 00:20:44.423 "zone_append": false, 00:20:44.423 "compare": false, 00:20:44.423 "compare_and_write": false, 00:20:44.423 "abort": true, 00:20:44.423 "seek_hole": false, 00:20:44.423 "seek_data": false, 00:20:44.423 "copy": true, 00:20:44.423 "nvme_iov_md": false 00:20:44.423 }, 00:20:44.423 "memory_domains": [ 00:20:44.423 { 00:20:44.423 "dma_device_id": "system", 00:20:44.423 "dma_device_type": 1 00:20:44.423 }, 00:20:44.423 { 00:20:44.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.423 "dma_device_type": 2 00:20:44.423 } 00:20:44.423 ], 00:20:44.423 "driver_specific": { 00:20:44.423 "passthru": { 00:20:44.423 "name": "pt2", 00:20:44.423 "base_bdev_name": "malloc2" 00:20:44.423 } 00:20:44.423 } 00:20:44.423 }' 00:20:44.423 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:44.423 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:44.423 10:35:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:44.423 10:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:44.423 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:44.423 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:44.423 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:44.423 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:44.699 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:44.699 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:44.699 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:44.699 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:44.699 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:44.699 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:44.699 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:44.957 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:44.957 "name": "pt3", 00:20:44.957 "aliases": [ 00:20:44.957 "00000000-0000-0000-0000-000000000003" 00:20:44.957 ], 00:20:44.957 "product_name": "passthru", 00:20:44.957 "block_size": 512, 00:20:44.957 "num_blocks": 65536, 00:20:44.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:44.957 "assigned_rate_limits": { 00:20:44.957 "rw_ios_per_sec": 0, 00:20:44.957 "rw_mbytes_per_sec": 0, 00:20:44.957 "r_mbytes_per_sec": 0, 00:20:44.957 "w_mbytes_per_sec": 0 00:20:44.957 }, 00:20:44.957 "claimed": true, 00:20:44.957 
"claim_type": "exclusive_write", 00:20:44.957 "zoned": false, 00:20:44.957 "supported_io_types": { 00:20:44.957 "read": true, 00:20:44.957 "write": true, 00:20:44.957 "unmap": true, 00:20:44.957 "flush": true, 00:20:44.957 "reset": true, 00:20:44.957 "nvme_admin": false, 00:20:44.957 "nvme_io": false, 00:20:44.957 "nvme_io_md": false, 00:20:44.957 "write_zeroes": true, 00:20:44.957 "zcopy": true, 00:20:44.957 "get_zone_info": false, 00:20:44.957 "zone_management": false, 00:20:44.957 "zone_append": false, 00:20:44.957 "compare": false, 00:20:44.957 "compare_and_write": false, 00:20:44.957 "abort": true, 00:20:44.957 "seek_hole": false, 00:20:44.957 "seek_data": false, 00:20:44.957 "copy": true, 00:20:44.957 "nvme_iov_md": false 00:20:44.957 }, 00:20:44.957 "memory_domains": [ 00:20:44.957 { 00:20:44.957 "dma_device_id": "system", 00:20:44.957 "dma_device_type": 1 00:20:44.957 }, 00:20:44.957 { 00:20:44.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.957 "dma_device_type": 2 00:20:44.957 } 00:20:44.957 ], 00:20:44.957 "driver_specific": { 00:20:44.957 "passthru": { 00:20:44.957 "name": "pt3", 00:20:44.957 "base_bdev_name": "malloc3" 00:20:44.957 } 00:20:44.957 } 00:20:44.957 }' 00:20:44.957 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:44.957 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:44.957 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:44.957 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:44.957 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:44.957 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:44.957 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:44.957 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:20:44.957 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:44.957 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:45.215 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:45.215 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:45.215 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:45.215 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:45.215 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:45.471 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:45.471 "name": "pt4", 00:20:45.471 "aliases": [ 00:20:45.471 "00000000-0000-0000-0000-000000000004" 00:20:45.471 ], 00:20:45.471 "product_name": "passthru", 00:20:45.471 "block_size": 512, 00:20:45.471 "num_blocks": 65536, 00:20:45.471 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:45.471 "assigned_rate_limits": { 00:20:45.471 "rw_ios_per_sec": 0, 00:20:45.471 "rw_mbytes_per_sec": 0, 00:20:45.471 "r_mbytes_per_sec": 0, 00:20:45.471 "w_mbytes_per_sec": 0 00:20:45.471 }, 00:20:45.471 "claimed": true, 00:20:45.471 "claim_type": "exclusive_write", 00:20:45.471 "zoned": false, 00:20:45.471 "supported_io_types": { 00:20:45.471 "read": true, 00:20:45.471 "write": true, 00:20:45.471 "unmap": true, 00:20:45.471 "flush": true, 00:20:45.471 "reset": true, 00:20:45.471 "nvme_admin": false, 00:20:45.471 "nvme_io": false, 00:20:45.471 "nvme_io_md": false, 00:20:45.471 "write_zeroes": true, 00:20:45.471 "zcopy": true, 00:20:45.471 "get_zone_info": false, 00:20:45.471 "zone_management": false, 00:20:45.471 "zone_append": false, 00:20:45.471 "compare": false, 00:20:45.471 
"compare_and_write": false, 00:20:45.471 "abort": true, 00:20:45.471 "seek_hole": false, 00:20:45.471 "seek_data": false, 00:20:45.471 "copy": true, 00:20:45.471 "nvme_iov_md": false 00:20:45.471 }, 00:20:45.471 "memory_domains": [ 00:20:45.471 { 00:20:45.471 "dma_device_id": "system", 00:20:45.471 "dma_device_type": 1 00:20:45.471 }, 00:20:45.471 { 00:20:45.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.471 "dma_device_type": 2 00:20:45.471 } 00:20:45.471 ], 00:20:45.471 "driver_specific": { 00:20:45.471 "passthru": { 00:20:45.471 "name": "pt4", 00:20:45.471 "base_bdev_name": "malloc4" 00:20:45.471 } 00:20:45.471 } 00:20:45.471 }' 00:20:45.471 10:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:45.471 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:45.471 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:45.471 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:45.471 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:45.471 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:45.471 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:45.471 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:45.729 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:45.729 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:45.729 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:45.729 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:45.729 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:45.729 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:20:45.987 [2024-07-25 10:35:49.502948] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:45.987 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 6d75d983-d840-4692-a660-ae5e68bd6160 '!=' 6d75d983-d840-4692-a660-ae5e68bd6160 ']' 00:20:45.987 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:20:45.987 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:45.987 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:45.987 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:46.245 [2024-07-25 10:35:49.747391] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:46.245 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:46.245 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:46.245 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:46.245 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:46.245 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:46.245 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:46.245 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.245 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.245 10:35:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.245 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.245 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.245 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.503 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.503 "name": "raid_bdev1", 00:20:46.503 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:46.503 "strip_size_kb": 0, 00:20:46.503 "state": "online", 00:20:46.503 "raid_level": "raid1", 00:20:46.503 "superblock": true, 00:20:46.503 "num_base_bdevs": 4, 00:20:46.503 "num_base_bdevs_discovered": 3, 00:20:46.503 "num_base_bdevs_operational": 3, 00:20:46.503 "base_bdevs_list": [ 00:20:46.503 { 00:20:46.503 "name": null, 00:20:46.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.503 "is_configured": false, 00:20:46.503 "data_offset": 2048, 00:20:46.503 "data_size": 63488 00:20:46.503 }, 00:20:46.503 { 00:20:46.503 "name": "pt2", 00:20:46.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:46.503 "is_configured": true, 00:20:46.503 "data_offset": 2048, 00:20:46.503 "data_size": 63488 00:20:46.503 }, 00:20:46.503 { 00:20:46.503 "name": "pt3", 00:20:46.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:46.503 "is_configured": true, 00:20:46.503 "data_offset": 2048, 00:20:46.503 "data_size": 63488 00:20:46.503 }, 00:20:46.503 { 00:20:46.503 "name": "pt4", 00:20:46.503 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:46.503 "is_configured": true, 00:20:46.503 "data_offset": 2048, 00:20:46.503 "data_size": 63488 00:20:46.503 } 00:20:46.503 ] 00:20:46.503 }' 00:20:46.503 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.503 
10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.068 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:47.326 [2024-07-25 10:35:50.830204] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:47.326 [2024-07-25 10:35:50.830233] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:47.326 [2024-07-25 10:35:50.830292] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:47.326 [2024-07-25 10:35:50.830364] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:47.326 [2024-07-25 10:35:50.830377] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14b1860 name raid_bdev1, state offline 00:20:47.326 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.326 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:20:47.583 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:20:47.583 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:20:47.583 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:20:47.583 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:47.583 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:47.841 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:20:47.841 10:35:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:47.841 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:48.099 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:20:48.099 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:48.099 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:48.356 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:20:48.356 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:48.356 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:20:48.356 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:20:48.356 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:48.613 [2024-07-25 10:35:52.201735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:48.613 [2024-07-25 10:35:52.201789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.613 [2024-07-25 10:35:52.201809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16525b0 00:20:48.613 [2024-07-25 10:35:52.201822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.613 [2024-07-25 10:35:52.203692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.613 [2024-07-25 10:35:52.203716] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:20:48.613 [2024-07-25 10:35:52.203790] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:48.613 [2024-07-25 10:35:52.203829] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:48.613 pt2 00:20:48.613 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:48.613 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:48.613 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.613 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:48.613 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:48.613 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:48.613 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.613 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.613 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.613 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.613 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.613 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.870 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.870 "name": "raid_bdev1", 00:20:48.870 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:48.870 "strip_size_kb": 0, 00:20:48.870 "state": "configuring", 
00:20:48.870 "raid_level": "raid1", 00:20:48.870 "superblock": true, 00:20:48.870 "num_base_bdevs": 4, 00:20:48.870 "num_base_bdevs_discovered": 1, 00:20:48.870 "num_base_bdevs_operational": 3, 00:20:48.870 "base_bdevs_list": [ 00:20:48.870 { 00:20:48.870 "name": null, 00:20:48.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.870 "is_configured": false, 00:20:48.870 "data_offset": 2048, 00:20:48.870 "data_size": 63488 00:20:48.870 }, 00:20:48.870 { 00:20:48.870 "name": "pt2", 00:20:48.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:48.870 "is_configured": true, 00:20:48.870 "data_offset": 2048, 00:20:48.870 "data_size": 63488 00:20:48.870 }, 00:20:48.870 { 00:20:48.870 "name": null, 00:20:48.870 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:48.870 "is_configured": false, 00:20:48.870 "data_offset": 2048, 00:20:48.870 "data_size": 63488 00:20:48.870 }, 00:20:48.870 { 00:20:48.870 "name": null, 00:20:48.870 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:48.870 "is_configured": false, 00:20:48.870 "data_offset": 2048, 00:20:48.870 "data_size": 63488 00:20:48.870 } 00:20:48.870 ] 00:20:48.870 }' 00:20:48.870 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.871 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.435 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:20:49.435 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:20:49.435 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:49.693 [2024-07-25 10:35:53.236458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:49.693 [2024-07-25 10:35:53.236532] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.693 [2024-07-25 10:35:53.236555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x164fdf0 00:20:49.693 [2024-07-25 10:35:53.236568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.693 [2024-07-25 10:35:53.236947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.693 [2024-07-25 10:35:53.236968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:49.693 [2024-07-25 10:35:53.237036] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:49.693 [2024-07-25 10:35:53.237061] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:49.693 pt3 00:20:49.693 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:49.693 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:49.693 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:49.693 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:49.693 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:49.693 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:49.693 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.693 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.693 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.693 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.693 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.693 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.951 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.951 "name": "raid_bdev1", 00:20:49.951 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:49.951 "strip_size_kb": 0, 00:20:49.951 "state": "configuring", 00:20:49.951 "raid_level": "raid1", 00:20:49.951 "superblock": true, 00:20:49.951 "num_base_bdevs": 4, 00:20:49.951 "num_base_bdevs_discovered": 2, 00:20:49.951 "num_base_bdevs_operational": 3, 00:20:49.951 "base_bdevs_list": [ 00:20:49.951 { 00:20:49.951 "name": null, 00:20:49.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.951 "is_configured": false, 00:20:49.951 "data_offset": 2048, 00:20:49.951 "data_size": 63488 00:20:49.951 }, 00:20:49.951 { 00:20:49.951 "name": "pt2", 00:20:49.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:49.951 "is_configured": true, 00:20:49.951 "data_offset": 2048, 00:20:49.951 "data_size": 63488 00:20:49.951 }, 00:20:49.951 { 00:20:49.951 "name": "pt3", 00:20:49.951 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:49.951 "is_configured": true, 00:20:49.951 "data_offset": 2048, 00:20:49.951 "data_size": 63488 00:20:49.951 }, 00:20:49.951 { 00:20:49.951 "name": null, 00:20:49.951 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:49.951 "is_configured": false, 00:20:49.951 "data_offset": 2048, 00:20:49.951 "data_size": 63488 00:20:49.951 } 00:20:49.951 ] 00:20:49.951 }' 00:20:49.951 10:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.951 10:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.516 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:20:50.516 10:35:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:20:50.516 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:20:50.516 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:50.774 [2024-07-25 10:35:54.359422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:50.774 [2024-07-25 10:35:54.359506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.774 [2024-07-25 10:35:54.359526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b45d0 00:20:50.774 [2024-07-25 10:35:54.359538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.774 [2024-07-25 10:35:54.359948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.774 [2024-07-25 10:35:54.359970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:50.774 [2024-07-25 10:35:54.360041] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:50.774 [2024-07-25 10:35:54.360066] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:50.774 [2024-07-25 10:35:54.360219] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x14b8c70 00:20:50.774 [2024-07-25 10:35:54.360234] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:50.774 [2024-07-25 10:35:54.360395] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14b7730 00:20:50.774 [2024-07-25 10:35:54.360554] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14b8c70 00:20:50.774 [2024-07-25 10:35:54.360566] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x14b8c70 00:20:50.774 [2024-07-25 10:35:54.360653] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.774 pt4 00:20:50.774 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:50.774 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:50.774 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:50.774 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:50.774 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:50.774 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:50.774 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.774 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.774 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.774 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.774 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.774 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.032 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.032 "name": "raid_bdev1", 00:20:51.032 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:51.032 "strip_size_kb": 0, 00:20:51.032 "state": "online", 00:20:51.032 "raid_level": "raid1", 00:20:51.032 "superblock": true, 00:20:51.032 "num_base_bdevs": 4, 00:20:51.032 "num_base_bdevs_discovered": 3, 00:20:51.032 
"num_base_bdevs_operational": 3, 00:20:51.032 "base_bdevs_list": [ 00:20:51.032 { 00:20:51.032 "name": null, 00:20:51.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.032 "is_configured": false, 00:20:51.032 "data_offset": 2048, 00:20:51.032 "data_size": 63488 00:20:51.032 }, 00:20:51.032 { 00:20:51.032 "name": "pt2", 00:20:51.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:51.032 "is_configured": true, 00:20:51.032 "data_offset": 2048, 00:20:51.032 "data_size": 63488 00:20:51.032 }, 00:20:51.032 { 00:20:51.032 "name": "pt3", 00:20:51.032 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:51.032 "is_configured": true, 00:20:51.032 "data_offset": 2048, 00:20:51.032 "data_size": 63488 00:20:51.032 }, 00:20:51.032 { 00:20:51.032 "name": "pt4", 00:20:51.032 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:51.032 "is_configured": true, 00:20:51.032 "data_offset": 2048, 00:20:51.032 "data_size": 63488 00:20:51.032 } 00:20:51.032 ] 00:20:51.032 }' 00:20:51.032 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.032 10:35:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.597 10:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:51.855 [2024-07-25 10:35:55.390105] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:51.855 [2024-07-25 10:35:55.390129] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:51.855 [2024-07-25 10:35:55.390188] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.855 [2024-07-25 10:35:55.390257] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:51.855 [2024-07-25 10:35:55.390270] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x14b8c70 name raid_bdev1, state offline 00:20:51.855 10:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.855 10:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:20:52.113 10:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:20:52.113 10:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:20:52.113 10:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:20:52.113 10:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:20:52.113 10:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:52.372 10:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:52.630 [2024-07-25 10:35:56.140039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:52.630 [2024-07-25 10:35:56.140109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.630 [2024-07-25 10:35:56.140137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b45d0 00:20:52.630 [2024-07-25 10:35:56.140164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.630 [2024-07-25 10:35:56.141844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.630 [2024-07-25 10:35:56.141867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:52.630 [2024-07-25 10:35:56.141933] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt1 00:20:52.630 [2024-07-25 10:35:56.141978] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:52.630 [2024-07-25 10:35:56.142093] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:52.630 [2024-07-25 10:35:56.142118] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:52.630 [2024-07-25 10:35:56.142133] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14af080 name raid_bdev1, state configuring 00:20:52.630 [2024-07-25 10:35:56.142171] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:52.630 [2024-07-25 10:35:56.142253] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:52.630 pt1 00:20:52.630 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:20:52.630 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:52.630 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:52.630 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:52.630 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:52.630 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:52.630 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:52.630 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.630 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.630 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.630 10:35:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.630 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.630 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.889 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.889 "name": "raid_bdev1", 00:20:52.889 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:52.889 "strip_size_kb": 0, 00:20:52.889 "state": "configuring", 00:20:52.889 "raid_level": "raid1", 00:20:52.889 "superblock": true, 00:20:52.889 "num_base_bdevs": 4, 00:20:52.889 "num_base_bdevs_discovered": 2, 00:20:52.889 "num_base_bdevs_operational": 3, 00:20:52.889 "base_bdevs_list": [ 00:20:52.889 { 00:20:52.889 "name": null, 00:20:52.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.889 "is_configured": false, 00:20:52.889 "data_offset": 2048, 00:20:52.889 "data_size": 63488 00:20:52.889 }, 00:20:52.889 { 00:20:52.889 "name": "pt2", 00:20:52.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:52.889 "is_configured": true, 00:20:52.889 "data_offset": 2048, 00:20:52.889 "data_size": 63488 00:20:52.889 }, 00:20:52.889 { 00:20:52.889 "name": "pt3", 00:20:52.889 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:52.889 "is_configured": true, 00:20:52.889 "data_offset": 2048, 00:20:52.889 "data_size": 63488 00:20:52.889 }, 00:20:52.889 { 00:20:52.889 "name": null, 00:20:52.889 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:52.889 "is_configured": false, 00:20:52.889 "data_offset": 2048, 00:20:52.889 "data_size": 63488 00:20:52.889 } 00:20:52.889 ] 00:20:52.889 }' 00:20:52.889 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.889 10:35:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:20:53.455 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:20:53.455 10:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:53.714 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:20:53.714 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:53.714 [2024-07-25 10:35:57.403419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:53.714 [2024-07-25 10:35:57.403493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.714 [2024-07-25 10:35:57.403514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b92d0 00:20:53.714 [2024-07-25 10:35:57.403526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.714 [2024-07-25 10:35:57.403899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.714 [2024-07-25 10:35:57.403920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:53.714 [2024-07-25 10:35:57.403990] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:53.714 [2024-07-25 10:35:57.404016] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:53.714 [2024-07-25 10:35:57.404151] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x14b98e0 00:20:53.714 [2024-07-25 10:35:57.404165] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:53.714 [2024-07-25 10:35:57.404303] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x14b5680 00:20:53.714 [2024-07-25 10:35:57.404456] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14b98e0 00:20:53.714 [2024-07-25 10:35:57.404469] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14b98e0 00:20:53.714 [2024-07-25 10:35:57.404556] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.714 pt4 00:20:53.714 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:53.714 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:53.714 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:53.714 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:53.714 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:53.714 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:53.714 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:53.715 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:53.715 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:53.715 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:53.973 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.973 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.973 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.973 "name": "raid_bdev1", 
00:20:53.973 "uuid": "6d75d983-d840-4692-a660-ae5e68bd6160", 00:20:53.973 "strip_size_kb": 0, 00:20:53.973 "state": "online", 00:20:53.973 "raid_level": "raid1", 00:20:53.973 "superblock": true, 00:20:53.973 "num_base_bdevs": 4, 00:20:53.973 "num_base_bdevs_discovered": 3, 00:20:53.973 "num_base_bdevs_operational": 3, 00:20:53.973 "base_bdevs_list": [ 00:20:53.973 { 00:20:53.973 "name": null, 00:20:53.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.973 "is_configured": false, 00:20:53.973 "data_offset": 2048, 00:20:53.973 "data_size": 63488 00:20:53.973 }, 00:20:53.973 { 00:20:53.973 "name": "pt2", 00:20:53.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:53.973 "is_configured": true, 00:20:53.973 "data_offset": 2048, 00:20:53.973 "data_size": 63488 00:20:53.973 }, 00:20:53.973 { 00:20:53.973 "name": "pt3", 00:20:53.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:53.973 "is_configured": true, 00:20:53.973 "data_offset": 2048, 00:20:53.973 "data_size": 63488 00:20:53.973 }, 00:20:53.973 { 00:20:53.973 "name": "pt4", 00:20:53.973 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:53.973 "is_configured": true, 00:20:53.973 "data_offset": 2048, 00:20:53.973 "data_size": 63488 00:20:53.973 } 00:20:53.973 ] 00:20:53.973 }' 00:20:53.973 10:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.973 10:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.538 10:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:20:54.538 10:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:54.796 10:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:20:54.796 10:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:54.796 10:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:20:55.056 [2024-07-25 10:35:58.695047] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.056 10:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 6d75d983-d840-4692-a660-ae5e68bd6160 '!=' 6d75d983-d840-4692-a660-ae5e68bd6160 ']' 00:20:55.056 10:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2424609 00:20:55.056 10:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2424609 ']' 00:20:55.056 10:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2424609 00:20:55.056 10:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:20:55.056 10:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:55.056 10:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2424609 00:20:55.056 10:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:55.056 10:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:55.056 10:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2424609' 00:20:55.056 killing process with pid 2424609 00:20:55.056 10:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2424609 00:20:55.056 [2024-07-25 10:35:58.738475] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:55.056 10:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2424609 00:20:55.056 [2024-07-25 10:35:58.738531] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:20:55.056 [2024-07-25 10:35:58.738600] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:55.056 [2024-07-25 10:35:58.738612] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14b98e0 name raid_bdev1, state offline 00:20:55.314 [2024-07-25 10:35:58.824418] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:55.573 10:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:20:55.573 00:20:55.573 real 0m25.610s 00:20:55.573 user 0m47.389s 00:20:55.573 sys 0m3.528s 00:20:55.573 10:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:55.573 10:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.573 ************************************ 00:20:55.573 END TEST raid_superblock_test 00:20:55.573 ************************************ 00:20:55.573 10:35:59 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:20:55.573 10:35:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:55.573 10:35:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:55.573 10:35:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:55.831 ************************************ 00:20:55.831 START TEST raid_read_error_test 00:20:55.831 ************************************ 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 
-- # (( i = 1 )) 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:55.832 
10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ebYtdqapQP 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2428168 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2428168 /var/tmp/spdk-raid.sock 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2428168 ']' 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:55.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.832 10:35:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.832 [2024-07-25 10:35:59.349665] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:20:55.832 [2024-07-25 10:35:59.349734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428168 ] 00:20:55.832 [2024-07-25 10:35:59.425724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.832 [2024-07-25 10:35:59.536698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.090 [2024-07-25 10:35:59.612394] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.090 [2024-07-25 10:35:59.612439] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.656 10:36:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:56.656 10:36:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:20:56.656 10:36:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:56.656 10:36:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:56.914 BaseBdev1_malloc 00:20:56.914 10:36:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:57.173 true 00:20:57.173 10:36:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:57.431 [2024-07-25 10:36:01.094163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:57.431 [2024-07-25 10:36:01.094218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:20:57.431 [2024-07-25 10:36:01.094246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23e9250 00:20:57.431 [2024-07-25 10:36:01.094262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.431 [2024-07-25 10:36:01.096182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.431 [2024-07-25 10:36:01.096211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:57.431 BaseBdev1 00:20:57.431 10:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:57.431 10:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:57.688 BaseBdev2_malloc 00:20:57.688 10:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:57.946 true 00:20:57.946 10:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:58.204 [2024-07-25 10:36:01.859548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:58.204 [2024-07-25 10:36:01.859618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.204 [2024-07-25 10:36:01.859644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23d8650 00:20:58.204 [2024-07-25 10:36:01.859658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.204 [2024-07-25 10:36:01.861249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.204 [2024-07-25 10:36:01.861272] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:58.204 BaseBdev2 00:20:58.204 10:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:58.204 10:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:58.462 BaseBdev3_malloc 00:20:58.462 10:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:58.721 true 00:20:58.721 10:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:58.979 [2024-07-25 10:36:02.619869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:58.979 [2024-07-25 10:36:02.619932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.979 [2024-07-25 10:36:02.619956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23ce5d0 00:20:58.979 [2024-07-25 10:36:02.619969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.979 [2024-07-25 10:36:02.621555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.979 [2024-07-25 10:36:02.621583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:58.979 BaseBdev3 00:20:58.979 10:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:58.979 10:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev4_malloc 00:20:59.237 BaseBdev4_malloc 00:20:59.237 10:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:59.495 true 00:20:59.495 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:59.754 [2024-07-25 10:36:03.369960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:59.754 [2024-07-25 10:36:03.370023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.754 [2024-07-25 10:36:03.370048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x222ed10 00:20:59.754 [2024-07-25 10:36:03.370061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.754 [2024-07-25 10:36:03.371642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.754 [2024-07-25 10:36:03.371667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:59.755 BaseBdev4 00:20:59.755 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:21:00.016 [2024-07-25 10:36:03.618653] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:00.016 [2024-07-25 10:36:03.619936] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:00.016 [2024-07-25 10:36:03.620003] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:00.016 [2024-07-25 10:36:03.620066] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev4 is claimed 00:21:00.016 [2024-07-25 10:36:03.620331] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x22308c0 00:21:00.016 [2024-07-25 10:36:03.620346] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:00.016 [2024-07-25 10:36:03.620559] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2231240 00:21:00.016 [2024-07-25 10:36:03.620727] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22308c0 00:21:00.016 [2024-07-25 10:36:03.620740] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22308c0 00:21:00.016 [2024-07-25 10:36:03.620863] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.016 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:00.016 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:00.016 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:00.016 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:00.016 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:00.016 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:00.016 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:00.016 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:00.016 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:00.016 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:00.016 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.016 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.273 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:00.273 "name": "raid_bdev1", 00:21:00.273 "uuid": "628281d3-40f3-4379-a23c-171cd8444c22", 00:21:00.273 "strip_size_kb": 0, 00:21:00.273 "state": "online", 00:21:00.273 "raid_level": "raid1", 00:21:00.273 "superblock": true, 00:21:00.273 "num_base_bdevs": 4, 00:21:00.273 "num_base_bdevs_discovered": 4, 00:21:00.273 "num_base_bdevs_operational": 4, 00:21:00.273 "base_bdevs_list": [ 00:21:00.273 { 00:21:00.273 "name": "BaseBdev1", 00:21:00.273 "uuid": "9adb6ffb-cd51-59c8-a00e-816148f2f8ed", 00:21:00.273 "is_configured": true, 00:21:00.273 "data_offset": 2048, 00:21:00.273 "data_size": 63488 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "name": "BaseBdev2", 00:21:00.273 "uuid": "bfacb980-d9e1-5988-8a9a-26419e6eee40", 00:21:00.273 "is_configured": true, 00:21:00.273 "data_offset": 2048, 00:21:00.273 "data_size": 63488 00:21:00.274 }, 00:21:00.274 { 00:21:00.274 "name": "BaseBdev3", 00:21:00.274 "uuid": "02113d0b-9a08-51e7-ba8f-17ef5f98997c", 00:21:00.274 "is_configured": true, 00:21:00.274 "data_offset": 2048, 00:21:00.274 "data_size": 63488 00:21:00.274 }, 00:21:00.274 { 00:21:00.274 "name": "BaseBdev4", 00:21:00.274 "uuid": "9a5e5948-1cfd-5467-a020-9737001299de", 00:21:00.274 "is_configured": true, 00:21:00.274 "data_offset": 2048, 00:21:00.274 "data_size": 63488 00:21:00.274 } 00:21:00.274 ] 00:21:00.274 }' 00:21:00.274 10:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:00.274 10:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.839 10:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:00.839 10:36:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:00.839 [2024-07-25 10:36:04.521478] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2234880 00:21:01.772 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.061 
10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.061 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.320 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.320 "name": "raid_bdev1", 00:21:02.320 "uuid": "628281d3-40f3-4379-a23c-171cd8444c22", 00:21:02.320 "strip_size_kb": 0, 00:21:02.320 "state": "online", 00:21:02.320 "raid_level": "raid1", 00:21:02.320 "superblock": true, 00:21:02.320 "num_base_bdevs": 4, 00:21:02.320 "num_base_bdevs_discovered": 4, 00:21:02.320 "num_base_bdevs_operational": 4, 00:21:02.320 "base_bdevs_list": [ 00:21:02.320 { 00:21:02.320 "name": "BaseBdev1", 00:21:02.320 "uuid": "9adb6ffb-cd51-59c8-a00e-816148f2f8ed", 00:21:02.320 "is_configured": true, 00:21:02.320 "data_offset": 2048, 00:21:02.320 "data_size": 63488 00:21:02.320 }, 00:21:02.320 { 00:21:02.320 "name": "BaseBdev2", 00:21:02.320 "uuid": "bfacb980-d9e1-5988-8a9a-26419e6eee40", 00:21:02.320 "is_configured": true, 00:21:02.320 "data_offset": 2048, 00:21:02.320 "data_size": 63488 00:21:02.320 }, 00:21:02.320 { 00:21:02.320 "name": "BaseBdev3", 00:21:02.320 "uuid": "02113d0b-9a08-51e7-ba8f-17ef5f98997c", 00:21:02.320 "is_configured": true, 00:21:02.320 "data_offset": 2048, 00:21:02.320 "data_size": 63488 00:21:02.320 }, 00:21:02.320 { 00:21:02.320 "name": "BaseBdev4", 00:21:02.320 "uuid": "9a5e5948-1cfd-5467-a020-9737001299de", 00:21:02.320 "is_configured": true, 00:21:02.320 "data_offset": 2048, 00:21:02.320 "data_size": 63488 00:21:02.320 } 00:21:02.320 ] 00:21:02.320 }' 00:21:02.320 10:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.320 10:36:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:02.888 10:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:03.146 [2024-07-25 10:36:06.757266] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:03.146 [2024-07-25 10:36:06.757299] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:03.146 [2024-07-25 10:36:06.760373] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:03.146 [2024-07-25 10:36:06.760417] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.146 [2024-07-25 10:36:06.760553] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:03.146 [2024-07-25 10:36:06.760570] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22308c0 name raid_bdev1, state offline 00:21:03.146 0 00:21:03.146 10:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2428168 00:21:03.146 10:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2428168 ']' 00:21:03.146 10:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2428168 00:21:03.146 10:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:21:03.146 10:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.146 10:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2428168 00:21:03.146 10:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:03.146 10:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:03.146 10:36:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2428168' 00:21:03.146 killing process with pid 2428168 00:21:03.146 10:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2428168 00:21:03.146 [2024-07-25 10:36:06.803271] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:03.146 10:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2428168 00:21:03.146 [2024-07-25 10:36:06.843239] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:03.711 10:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ebYtdqapQP 00:21:03.711 10:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:03.711 10:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:03.711 10:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:03.711 10:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:03.711 10:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:03.711 10:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:03.711 10:36:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:03.711 00:21:03.711 real 0m7.850s 00:21:03.711 user 0m12.760s 00:21:03.711 sys 0m1.108s 00:21:03.711 10:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:03.711 10:36:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.712 ************************************ 00:21:03.712 END TEST raid_read_error_test 00:21:03.712 ************************************ 00:21:03.712 10:36:07 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:21:03.712 10:36:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 
1 ']' 00:21:03.712 10:36:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:03.712 10:36:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:03.712 ************************************ 00:21:03.712 START TEST raid_write_error_test 00:21:03.712 ************************************ 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:21:03.712 10:36:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.aYeg09AM2d 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2429315 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2429315 /var/tmp/spdk-raid.sock 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2429315 ']' 00:21:03.712 10:36:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:03.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:03.712 10:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.712 [2024-07-25 10:36:07.248403] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:21:03.712 [2024-07-25 10:36:07.248486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2429315 ] 00:21:03.712 [2024-07-25 10:36:07.331110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.970 [2024-07-25 10:36:07.443216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.970 [2024-07-25 10:36:07.521657] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:03.970 [2024-07-25 10:36:07.521691] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:03.970 10:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:03.970 10:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:21:03.970 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:03.970 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:04.227 BaseBdev1_malloc 00:21:04.227 10:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:04.485 true 00:21:04.485 10:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:04.742 [2024-07-25 10:36:08.334343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:04.742 [2024-07-25 10:36:08.334399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.742 [2024-07-25 10:36:08.334423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eef250 00:21:04.742 [2024-07-25 10:36:08.334438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.742 [2024-07-25 10:36:08.336214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.742 [2024-07-25 10:36:08.336242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:04.742 BaseBdev1 00:21:04.742 10:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:04.742 10:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:05.000 BaseBdev2_malloc 00:21:05.000 10:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:05.258 true 00:21:05.258 10:36:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:05.515 [2024-07-25 10:36:09.111386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:05.515 [2024-07-25 10:36:09.111482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.515 [2024-07-25 10:36:09.111508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ede650 00:21:05.515 [2024-07-25 10:36:09.111535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.515 [2024-07-25 10:36:09.113165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.515 [2024-07-25 10:36:09.113189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:05.515 BaseBdev2 00:21:05.515 10:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:05.515 10:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:05.773 BaseBdev3_malloc 00:21:05.773 10:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:06.031 true 00:21:06.031 10:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:06.295 [2024-07-25 10:36:09.839168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:06.295 [2024-07-25 10:36:09.839217] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.295 [2024-07-25 10:36:09.839240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ed45d0 00:21:06.295 [2024-07-25 10:36:09.839254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.295 [2024-07-25 10:36:09.840669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.295 [2024-07-25 10:36:09.840697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:06.295 BaseBdev3 00:21:06.295 10:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:06.295 10:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:06.552 BaseBdev4_malloc 00:21:06.552 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:21:06.809 true 00:21:06.809 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:07.067 [2024-07-25 10:36:10.604655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:07.067 [2024-07-25 10:36:10.604731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.067 [2024-07-25 10:36:10.604758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d34d10 00:21:07.067 [2024-07-25 10:36:10.604771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.067 [2024-07-25 10:36:10.606404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:21:07.067 [2024-07-25 10:36:10.606442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:07.067 BaseBdev4 00:21:07.067 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:21:07.325 [2024-07-25 10:36:10.833319] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:07.325 [2024-07-25 10:36:10.834602] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:07.325 [2024-07-25 10:36:10.834668] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:07.325 [2024-07-25 10:36:10.834731] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:07.325 [2024-07-25 10:36:10.834960] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d368c0 00:21:07.325 [2024-07-25 10:36:10.834974] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:07.325 [2024-07-25 10:36:10.835198] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d37240 00:21:07.325 [2024-07-25 10:36:10.835371] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d368c0 00:21:07.325 [2024-07-25 10:36:10.835390] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d368c0 00:21:07.325 [2024-07-25 10:36:10.835541] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.325 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:07.325 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:07.325 10:36:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:07.325 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:07.325 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:07.325 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:07.325 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:07.325 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:07.325 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:07.325 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:07.325 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.325 10:36:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.583 10:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.583 "name": "raid_bdev1", 00:21:07.583 "uuid": "6b819c1d-2a56-43c5-8e70-db6bce265117", 00:21:07.583 "strip_size_kb": 0, 00:21:07.583 "state": "online", 00:21:07.583 "raid_level": "raid1", 00:21:07.583 "superblock": true, 00:21:07.583 "num_base_bdevs": 4, 00:21:07.583 "num_base_bdevs_discovered": 4, 00:21:07.583 "num_base_bdevs_operational": 4, 00:21:07.583 "base_bdevs_list": [ 00:21:07.583 { 00:21:07.583 "name": "BaseBdev1", 00:21:07.583 "uuid": "c1c6c32a-b58c-5b3f-b42b-d028be94e7b6", 00:21:07.583 "is_configured": true, 00:21:07.583 "data_offset": 2048, 00:21:07.583 "data_size": 63488 00:21:07.583 }, 00:21:07.583 { 00:21:07.583 "name": "BaseBdev2", 00:21:07.583 "uuid": "73ce85e4-bd7b-5f4c-8690-41641b56bf0c", 00:21:07.583 "is_configured": true, 
00:21:07.583 "data_offset": 2048, 00:21:07.583 "data_size": 63488 00:21:07.583 }, 00:21:07.583 { 00:21:07.583 "name": "BaseBdev3", 00:21:07.583 "uuid": "baace2c6-5fce-58ee-87f6-81ea2c3832ac", 00:21:07.583 "is_configured": true, 00:21:07.583 "data_offset": 2048, 00:21:07.583 "data_size": 63488 00:21:07.583 }, 00:21:07.583 { 00:21:07.583 "name": "BaseBdev4", 00:21:07.583 "uuid": "9ed52bed-80e7-5bba-ac9f-5f1552b6d696", 00:21:07.583 "is_configured": true, 00:21:07.583 "data_offset": 2048, 00:21:07.583 "data_size": 63488 00:21:07.583 } 00:21:07.583 ] 00:21:07.583 }' 00:21:07.583 10:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.583 10:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.149 10:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:08.149 10:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:08.149 [2024-07-25 10:36:11.732184] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d3a880 00:21:09.083 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:09.341 [2024-07-25 10:36:12.876532] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:21:09.341 [2024-07-25 10:36:12.876598] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:09.341 [2024-07-25 10:36:12.876838] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1d3a880 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.341 10:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.599 10:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.599 "name": "raid_bdev1", 00:21:09.599 "uuid": "6b819c1d-2a56-43c5-8e70-db6bce265117", 00:21:09.599 "strip_size_kb": 0, 00:21:09.599 "state": "online", 00:21:09.599 "raid_level": 
"raid1", 00:21:09.599 "superblock": true, 00:21:09.599 "num_base_bdevs": 4, 00:21:09.599 "num_base_bdevs_discovered": 3, 00:21:09.599 "num_base_bdevs_operational": 3, 00:21:09.599 "base_bdevs_list": [ 00:21:09.599 { 00:21:09.599 "name": null, 00:21:09.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.599 "is_configured": false, 00:21:09.599 "data_offset": 2048, 00:21:09.599 "data_size": 63488 00:21:09.599 }, 00:21:09.599 { 00:21:09.599 "name": "BaseBdev2", 00:21:09.599 "uuid": "73ce85e4-bd7b-5f4c-8690-41641b56bf0c", 00:21:09.599 "is_configured": true, 00:21:09.599 "data_offset": 2048, 00:21:09.599 "data_size": 63488 00:21:09.599 }, 00:21:09.599 { 00:21:09.599 "name": "BaseBdev3", 00:21:09.599 "uuid": "baace2c6-5fce-58ee-87f6-81ea2c3832ac", 00:21:09.599 "is_configured": true, 00:21:09.599 "data_offset": 2048, 00:21:09.599 "data_size": 63488 00:21:09.599 }, 00:21:09.599 { 00:21:09.599 "name": "BaseBdev4", 00:21:09.599 "uuid": "9ed52bed-80e7-5bba-ac9f-5f1552b6d696", 00:21:09.599 "is_configured": true, 00:21:09.599 "data_offset": 2048, 00:21:09.599 "data_size": 63488 00:21:09.600 } 00:21:09.600 ] 00:21:09.600 }' 00:21:09.600 10:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.600 10:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.165 10:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:10.422 [2024-07-25 10:36:13.908186] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:10.422 [2024-07-25 10:36:13.908244] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:10.422 [2024-07-25 10:36:13.911225] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:10.422 [2024-07-25 10:36:13.911271] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:21:10.422 [2024-07-25 10:36:13.911369] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:10.422 [2024-07-25 10:36:13.911385] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d368c0 name raid_bdev1, state offline 00:21:10.422 0 00:21:10.422 10:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2429315 00:21:10.422 10:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2429315 ']' 00:21:10.422 10:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2429315 00:21:10.422 10:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:21:10.422 10:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:10.422 10:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2429315 00:21:10.422 10:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:10.422 10:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:10.422 10:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2429315' 00:21:10.422 killing process with pid 2429315 00:21:10.422 10:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2429315 00:21:10.422 [2024-07-25 10:36:13.959756] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:10.422 10:36:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2429315 00:21:10.422 [2024-07-25 10:36:14.004020] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:10.680 10:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.aYeg09AM2d 00:21:10.680 10:36:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:10.680 10:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:10.680 10:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:10.680 10:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:10.680 10:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:10.680 10:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:10.680 10:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:10.680 00:21:10.680 real 0m7.113s 00:21:10.680 user 0m11.704s 00:21:10.680 sys 0m1.071s 00:21:10.680 10:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:10.680 10:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.680 ************************************ 00:21:10.680 END TEST raid_write_error_test 00:21:10.680 ************************************ 00:21:10.680 10:36:14 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:21:10.680 10:36:14 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:21:10.680 10:36:14 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:21:10.680 10:36:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:10.680 10:36:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:10.680 10:36:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:10.680 ************************************ 00:21:10.680 START TEST raid_rebuild_test 00:21:10.680 ************************************ 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:21:10.680 10:36:14 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2430714 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2430714 /var/tmp/spdk-raid.sock 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 2430714 ']' 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:10.680 10:36:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:10.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:10.681 10:36:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:10.681 10:36:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.938 [2024-07-25 10:36:14.412296] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:21:10.938 [2024-07-25 10:36:14.412376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2430714 ] 00:21:10.938 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:10.938 Zero copy mechanism will not be used. 00:21:10.938 [2024-07-25 10:36:14.500082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.938 [2024-07-25 10:36:14.623182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.196 [2024-07-25 10:36:14.697551] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.196 [2024-07-25 10:36:14.697588] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.196 10:36:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.196 10:36:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:21:11.196 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:11.196 10:36:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:11.454 BaseBdev1_malloc 00:21:11.454 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:11.712 [2024-07-25 10:36:15.280257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:11.713 [2024-07-25 10:36:15.280327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.713 [2024-07-25 10:36:15.280361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x12c1430 00:21:11.713 [2024-07-25 10:36:15.280377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.713 [2024-07-25 10:36:15.282271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.713 [2024-07-25 10:36:15.282300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:11.713 BaseBdev1 00:21:11.713 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:11.713 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:11.970 BaseBdev2_malloc 00:21:11.970 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:12.228 [2024-07-25 10:36:15.821781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:12.228 [2024-07-25 10:36:15.821838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.228 [2024-07-25 10:36:15.821866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1464a20 00:21:12.228 [2024-07-25 10:36:15.821879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.228 [2024-07-25 10:36:15.823471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.228 [2024-07-25 10:36:15.823494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:12.228 BaseBdev2 00:21:12.228 10:36:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:12.485 spare_malloc 00:21:12.485 
10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:12.743 spare_delay 00:21:12.743 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:13.001 [2024-07-25 10:36:16.569907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:13.001 [2024-07-25 10:36:16.569980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.001 [2024-07-25 10:36:16.570010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12b9070 00:21:13.001 [2024-07-25 10:36:16.570023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.001 [2024-07-25 10:36:16.571605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.001 [2024-07-25 10:36:16.571627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:13.001 spare 00:21:13.001 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:13.259 [2024-07-25 10:36:16.818643] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:13.259 [2024-07-25 10:36:16.819911] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:13.259 [2024-07-25 10:36:16.819998] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x145c010 00:21:13.259 [2024-07-25 10:36:16.820011] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:13.259 [2024-07-25 10:36:16.820247] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12bc4c0 00:21:13.259 [2024-07-25 10:36:16.820434] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x145c010 00:21:13.259 [2024-07-25 10:36:16.820447] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x145c010 00:21:13.259 [2024-07-25 10:36:16.820591] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.259 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:13.259 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:13.259 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:13.259 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:13.259 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:13.259 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:13.259 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:13.259 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:13.259 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:13.259 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:13.259 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.259 10:36:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.517 10:36:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:13.517 "name": "raid_bdev1", 
00:21:13.517 "uuid": "d6de6783-6322-4bd6-86aa-896de1a7c28a", 00:21:13.517 "strip_size_kb": 0, 00:21:13.517 "state": "online", 00:21:13.517 "raid_level": "raid1", 00:21:13.517 "superblock": false, 00:21:13.517 "num_base_bdevs": 2, 00:21:13.517 "num_base_bdevs_discovered": 2, 00:21:13.517 "num_base_bdevs_operational": 2, 00:21:13.517 "base_bdevs_list": [ 00:21:13.517 { 00:21:13.517 "name": "BaseBdev1", 00:21:13.517 "uuid": "cff44694-7b66-5166-95af-62d04cfe9c90", 00:21:13.517 "is_configured": true, 00:21:13.517 "data_offset": 0, 00:21:13.517 "data_size": 65536 00:21:13.517 }, 00:21:13.517 { 00:21:13.517 "name": "BaseBdev2", 00:21:13.517 "uuid": "be6d7919-36cc-5159-81a1-0d8c351b0c81", 00:21:13.517 "is_configured": true, 00:21:13.517 "data_offset": 0, 00:21:13.517 "data_size": 65536 00:21:13.517 } 00:21:13.517 ] 00:21:13.517 }' 00:21:13.517 10:36:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:13.517 10:36:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.082 10:36:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:14.082 10:36:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:21:14.341 [2024-07-25 10:36:17.929752] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.341 10:36:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:21:14.341 10:36:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.341 10:36:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:14.598 10:36:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:21:14.598 10:36:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:21:14.598 10:36:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:21:14.598 10:36:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:21:14.598 10:36:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:14.598 10:36:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:14.598 10:36:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:14.598 10:36:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:14.598 10:36:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:14.598 10:36:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:14.598 10:36:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:14.598 10:36:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:14.598 10:36:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:14.598 10:36:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:14.855 [2024-07-25 10:36:18.511142] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12b8a50 00:21:14.855 /dev/nbd0 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:21:14.855 10:36:18 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:14.855 1+0 records in 00:21:14.855 1+0 records out 00:21:14.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198876 s, 20.6 MB/s 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:14.855 10:36:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:21:15.112 10:36:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.112 10:36:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.112 10:36:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:21:15.112 10:36:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:21:15.112 10:36:18 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:21.664 65536+0 records in 00:21:21.664 65536+0 records out 00:21:21.664 33554432 bytes (34 MB, 32 MiB) copied, 5.61083 s, 6.0 MB/s 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:21.664 [2024-07-25 10:36:24.459683] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.664 [2024-07-25 10:36:24.671953] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.664 "name": "raid_bdev1", 00:21:21.664 "uuid": "d6de6783-6322-4bd6-86aa-896de1a7c28a", 00:21:21.664 "strip_size_kb": 0, 00:21:21.664 "state": "online", 00:21:21.664 "raid_level": "raid1", 00:21:21.664 "superblock": false, 
00:21:21.664 "num_base_bdevs": 2, 00:21:21.664 "num_base_bdevs_discovered": 1, 00:21:21.664 "num_base_bdevs_operational": 1, 00:21:21.664 "base_bdevs_list": [ 00:21:21.664 { 00:21:21.664 "name": null, 00:21:21.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.664 "is_configured": false, 00:21:21.664 "data_offset": 0, 00:21:21.664 "data_size": 65536 00:21:21.664 }, 00:21:21.664 { 00:21:21.664 "name": "BaseBdev2", 00:21:21.664 "uuid": "be6d7919-36cc-5159-81a1-0d8c351b0c81", 00:21:21.664 "is_configured": true, 00:21:21.664 "data_offset": 0, 00:21:21.664 "data_size": 65536 00:21:21.664 } 00:21:21.664 ] 00:21:21.664 }' 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.664 10:36:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.921 10:36:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:22.179 [2024-07-25 10:36:25.754958] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:22.179 [2024-07-25 10:36:25.761296] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12b8a50 00:21:22.179 [2024-07-25 10:36:25.763481] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:22.179 10:36:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:21:23.112 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.112 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:23.112 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:23.112 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:23.112 10:36:26 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:23.112 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.112 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.370 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:23.370 "name": "raid_bdev1", 00:21:23.370 "uuid": "d6de6783-6322-4bd6-86aa-896de1a7c28a", 00:21:23.370 "strip_size_kb": 0, 00:21:23.370 "state": "online", 00:21:23.370 "raid_level": "raid1", 00:21:23.370 "superblock": false, 00:21:23.370 "num_base_bdevs": 2, 00:21:23.370 "num_base_bdevs_discovered": 2, 00:21:23.370 "num_base_bdevs_operational": 2, 00:21:23.370 "process": { 00:21:23.370 "type": "rebuild", 00:21:23.370 "target": "spare", 00:21:23.370 "progress": { 00:21:23.370 "blocks": 24576, 00:21:23.370 "percent": 37 00:21:23.370 } 00:21:23.370 }, 00:21:23.370 "base_bdevs_list": [ 00:21:23.370 { 00:21:23.370 "name": "spare", 00:21:23.370 "uuid": "a3a8f698-e8bc-5c45-aee4-a6f8209674e5", 00:21:23.370 "is_configured": true, 00:21:23.370 "data_offset": 0, 00:21:23.370 "data_size": 65536 00:21:23.370 }, 00:21:23.370 { 00:21:23.370 "name": "BaseBdev2", 00:21:23.370 "uuid": "be6d7919-36cc-5159-81a1-0d8c351b0c81", 00:21:23.370 "is_configured": true, 00:21:23.370 "data_offset": 0, 00:21:23.370 "data_size": 65536 00:21:23.370 } 00:21:23.370 ] 00:21:23.370 }' 00:21:23.370 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:23.370 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:23.370 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:23.653 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.653 
10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:23.938 [2024-07-25 10:36:27.377986] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:23.938 [2024-07-25 10:36:27.478037] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:23.938 [2024-07-25 10:36:27.478094] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.938 [2024-07-25 10:36:27.478124] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:23.938 [2024-07-25 10:36:27.478136] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:23.938 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:23.938 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:23.938 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:23.938 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:23.939 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:23.939 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:23.939 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:23.939 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:23.939 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:23.939 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:23.939 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.939 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.196 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:24.196 "name": "raid_bdev1", 00:21:24.196 "uuid": "d6de6783-6322-4bd6-86aa-896de1a7c28a", 00:21:24.196 "strip_size_kb": 0, 00:21:24.196 "state": "online", 00:21:24.196 "raid_level": "raid1", 00:21:24.196 "superblock": false, 00:21:24.196 "num_base_bdevs": 2, 00:21:24.196 "num_base_bdevs_discovered": 1, 00:21:24.196 "num_base_bdevs_operational": 1, 00:21:24.196 "base_bdevs_list": [ 00:21:24.196 { 00:21:24.196 "name": null, 00:21:24.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.196 "is_configured": false, 00:21:24.196 "data_offset": 0, 00:21:24.196 "data_size": 65536 00:21:24.196 }, 00:21:24.196 { 00:21:24.196 "name": "BaseBdev2", 00:21:24.196 "uuid": "be6d7919-36cc-5159-81a1-0d8c351b0c81", 00:21:24.196 "is_configured": true, 00:21:24.196 "data_offset": 0, 00:21:24.196 "data_size": 65536 00:21:24.196 } 00:21:24.196 ] 00:21:24.196 }' 00:21:24.196 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:24.196 10:36:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.761 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:24.761 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:24.761 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:24.761 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:24.761 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:24.761 10:36:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.761 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.019 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:25.019 "name": "raid_bdev1", 00:21:25.019 "uuid": "d6de6783-6322-4bd6-86aa-896de1a7c28a", 00:21:25.019 "strip_size_kb": 0, 00:21:25.019 "state": "online", 00:21:25.019 "raid_level": "raid1", 00:21:25.019 "superblock": false, 00:21:25.019 "num_base_bdevs": 2, 00:21:25.019 "num_base_bdevs_discovered": 1, 00:21:25.019 "num_base_bdevs_operational": 1, 00:21:25.019 "base_bdevs_list": [ 00:21:25.019 { 00:21:25.019 "name": null, 00:21:25.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.019 "is_configured": false, 00:21:25.019 "data_offset": 0, 00:21:25.019 "data_size": 65536 00:21:25.019 }, 00:21:25.019 { 00:21:25.019 "name": "BaseBdev2", 00:21:25.019 "uuid": "be6d7919-36cc-5159-81a1-0d8c351b0c81", 00:21:25.019 "is_configured": true, 00:21:25.019 "data_offset": 0, 00:21:25.019 "data_size": 65536 00:21:25.019 } 00:21:25.019 ] 00:21:25.019 }' 00:21:25.019 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:25.019 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:25.019 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:25.019 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:25.019 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:25.277 [2024-07-25 10:36:28.875984] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:21:25.277 [2024-07-25 10:36:28.882386] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12b8db0 00:21:25.277 [2024-07-25 10:36:28.883972] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:25.277 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:26.210 10:36:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.210 10:36:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:26.210 10:36:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:26.210 10:36:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:26.210 10:36:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:26.210 10:36:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.210 10:36:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.467 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:26.467 "name": "raid_bdev1", 00:21:26.467 "uuid": "d6de6783-6322-4bd6-86aa-896de1a7c28a", 00:21:26.467 "strip_size_kb": 0, 00:21:26.467 "state": "online", 00:21:26.467 "raid_level": "raid1", 00:21:26.467 "superblock": false, 00:21:26.467 "num_base_bdevs": 2, 00:21:26.467 "num_base_bdevs_discovered": 2, 00:21:26.467 "num_base_bdevs_operational": 2, 00:21:26.467 "process": { 00:21:26.467 "type": "rebuild", 00:21:26.467 "target": "spare", 00:21:26.467 "progress": { 00:21:26.467 "blocks": 24576, 00:21:26.467 "percent": 37 00:21:26.467 } 00:21:26.467 }, 00:21:26.467 "base_bdevs_list": [ 00:21:26.467 { 00:21:26.467 "name": "spare", 00:21:26.467 "uuid": 
"a3a8f698-e8bc-5c45-aee4-a6f8209674e5", 00:21:26.468 "is_configured": true, 00:21:26.468 "data_offset": 0, 00:21:26.468 "data_size": 65536 00:21:26.468 }, 00:21:26.468 { 00:21:26.468 "name": "BaseBdev2", 00:21:26.468 "uuid": "be6d7919-36cc-5159-81a1-0d8c351b0c81", 00:21:26.468 "is_configured": true, 00:21:26.468 "data_offset": 0, 00:21:26.468 "data_size": 65536 00:21:26.468 } 00:21:26.468 ] 00:21:26.468 }' 00:21:26.468 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=754 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # 
local raid_bdev_info 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.725 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.983 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:26.983 "name": "raid_bdev1", 00:21:26.983 "uuid": "d6de6783-6322-4bd6-86aa-896de1a7c28a", 00:21:26.983 "strip_size_kb": 0, 00:21:26.983 "state": "online", 00:21:26.983 "raid_level": "raid1", 00:21:26.983 "superblock": false, 00:21:26.983 "num_base_bdevs": 2, 00:21:26.983 "num_base_bdevs_discovered": 2, 00:21:26.983 "num_base_bdevs_operational": 2, 00:21:26.983 "process": { 00:21:26.983 "type": "rebuild", 00:21:26.983 "target": "spare", 00:21:26.983 "progress": { 00:21:26.983 "blocks": 30720, 00:21:26.983 "percent": 46 00:21:26.983 } 00:21:26.983 }, 00:21:26.983 "base_bdevs_list": [ 00:21:26.983 { 00:21:26.983 "name": "spare", 00:21:26.983 "uuid": "a3a8f698-e8bc-5c45-aee4-a6f8209674e5", 00:21:26.983 "is_configured": true, 00:21:26.983 "data_offset": 0, 00:21:26.983 "data_size": 65536 00:21:26.983 }, 00:21:26.983 { 00:21:26.983 "name": "BaseBdev2", 00:21:26.983 "uuid": "be6d7919-36cc-5159-81a1-0d8c351b0c81", 00:21:26.983 "is_configured": true, 00:21:26.983 "data_offset": 0, 00:21:26.983 "data_size": 65536 00:21:26.983 } 00:21:26.983 ] 00:21:26.983 }' 00:21:26.983 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:26.983 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.983 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:26.983 10:36:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.983 10:36:30 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:27.915 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:27.915 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.915 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:27.915 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:27.915 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:27.915 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:27.915 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.915 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.172 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:28.172 "name": "raid_bdev1", 00:21:28.172 "uuid": "d6de6783-6322-4bd6-86aa-896de1a7c28a", 00:21:28.172 "strip_size_kb": 0, 00:21:28.172 "state": "online", 00:21:28.172 "raid_level": "raid1", 00:21:28.172 "superblock": false, 00:21:28.172 "num_base_bdevs": 2, 00:21:28.172 "num_base_bdevs_discovered": 2, 00:21:28.172 "num_base_bdevs_operational": 2, 00:21:28.172 "process": { 00:21:28.172 "type": "rebuild", 00:21:28.172 "target": "spare", 00:21:28.172 "progress": { 00:21:28.172 "blocks": 59392, 00:21:28.172 "percent": 90 00:21:28.172 } 00:21:28.172 }, 00:21:28.172 "base_bdevs_list": [ 00:21:28.172 { 00:21:28.172 "name": "spare", 00:21:28.172 "uuid": "a3a8f698-e8bc-5c45-aee4-a6f8209674e5", 00:21:28.172 "is_configured": true, 00:21:28.172 "data_offset": 0, 00:21:28.172 "data_size": 65536 00:21:28.172 }, 00:21:28.172 { 00:21:28.172 "name": "BaseBdev2", 
00:21:28.172 "uuid": "be6d7919-36cc-5159-81a1-0d8c351b0c81", 00:21:28.172 "is_configured": true, 00:21:28.172 "data_offset": 0, 00:21:28.172 "data_size": 65536 00:21:28.172 } 00:21:28.172 ] 00:21:28.172 }' 00:21:28.172 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:28.430 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:28.430 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:28.430 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:28.430 10:36:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:28.430 [2024-07-25 10:36:32.110535] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:28.430 [2024-07-25 10:36:32.110597] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:28.430 [2024-07-25 10:36:32.110656] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.362 10:36:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:29.362 10:36:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.362 10:36:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:29.362 10:36:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:29.362 10:36:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:29.362 10:36:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:29.362 10:36:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.362 10:36:32 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.620 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:29.620 "name": "raid_bdev1", 00:21:29.620 "uuid": "d6de6783-6322-4bd6-86aa-896de1a7c28a", 00:21:29.620 "strip_size_kb": 0, 00:21:29.620 "state": "online", 00:21:29.620 "raid_level": "raid1", 00:21:29.620 "superblock": false, 00:21:29.620 "num_base_bdevs": 2, 00:21:29.620 "num_base_bdevs_discovered": 2, 00:21:29.620 "num_base_bdevs_operational": 2, 00:21:29.620 "base_bdevs_list": [ 00:21:29.620 { 00:21:29.620 "name": "spare", 00:21:29.620 "uuid": "a3a8f698-e8bc-5c45-aee4-a6f8209674e5", 00:21:29.620 "is_configured": true, 00:21:29.620 "data_offset": 0, 00:21:29.620 "data_size": 65536 00:21:29.620 }, 00:21:29.620 { 00:21:29.620 "name": "BaseBdev2", 00:21:29.620 "uuid": "be6d7919-36cc-5159-81a1-0d8c351b0c81", 00:21:29.620 "is_configured": true, 00:21:29.620 "data_offset": 0, 00:21:29.620 "data_size": 65536 00:21:29.620 } 00:21:29.620 ] 00:21:29.620 }' 00:21:29.620 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:29.620 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:29.620 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:29.620 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:29.620 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:21:29.620 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:29.621 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:29.621 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:29.621 10:36:33 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@184 -- # local target=none 00:21:29.621 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:29.621 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.621 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.879 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:29.879 "name": "raid_bdev1", 00:21:29.879 "uuid": "d6de6783-6322-4bd6-86aa-896de1a7c28a", 00:21:29.879 "strip_size_kb": 0, 00:21:29.879 "state": "online", 00:21:29.879 "raid_level": "raid1", 00:21:29.879 "superblock": false, 00:21:29.879 "num_base_bdevs": 2, 00:21:29.879 "num_base_bdevs_discovered": 2, 00:21:29.879 "num_base_bdevs_operational": 2, 00:21:29.879 "base_bdevs_list": [ 00:21:29.879 { 00:21:29.879 "name": "spare", 00:21:29.879 "uuid": "a3a8f698-e8bc-5c45-aee4-a6f8209674e5", 00:21:29.879 "is_configured": true, 00:21:29.879 "data_offset": 0, 00:21:29.879 "data_size": 65536 00:21:29.879 }, 00:21:29.879 { 00:21:29.879 "name": "BaseBdev2", 00:21:29.879 "uuid": "be6d7919-36cc-5159-81a1-0d8c351b0c81", 00:21:29.879 "is_configured": true, 00:21:29.879 "data_offset": 0, 00:21:29.879 "data_size": 65536 00:21:29.879 } 00:21:29.879 ] 00:21:29.879 }' 00:21:29.879 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:29.879 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:29.879 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:30.136 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:30.136 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:21:30.136 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:30.136 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:30.136 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:30.136 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:30.136 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:30.136 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:30.136 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:30.136 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:30.136 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:30.136 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.136 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.394 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:30.394 "name": "raid_bdev1", 00:21:30.394 "uuid": "d6de6783-6322-4bd6-86aa-896de1a7c28a", 00:21:30.394 "strip_size_kb": 0, 00:21:30.394 "state": "online", 00:21:30.394 "raid_level": "raid1", 00:21:30.394 "superblock": false, 00:21:30.394 "num_base_bdevs": 2, 00:21:30.394 "num_base_bdevs_discovered": 2, 00:21:30.394 "num_base_bdevs_operational": 2, 00:21:30.394 "base_bdevs_list": [ 00:21:30.394 { 00:21:30.394 "name": "spare", 00:21:30.394 "uuid": "a3a8f698-e8bc-5c45-aee4-a6f8209674e5", 00:21:30.394 "is_configured": true, 00:21:30.394 "data_offset": 0, 00:21:30.394 "data_size": 65536 00:21:30.394 }, 00:21:30.394 { 00:21:30.394 
"name": "BaseBdev2", 00:21:30.394 "uuid": "be6d7919-36cc-5159-81a1-0d8c351b0c81", 00:21:30.394 "is_configured": true, 00:21:30.394 "data_offset": 0, 00:21:30.394 "data_size": 65536 00:21:30.394 } 00:21:30.394 ] 00:21:30.394 }' 00:21:30.394 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:30.394 10:36:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.959 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:30.959 [2024-07-25 10:36:34.667545] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:30.959 [2024-07-25 10:36:34.667578] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:30.959 [2024-07-25 10:36:34.667653] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:30.959 [2024-07-25 10:36:34.667727] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:30.959 [2024-07-25 10:36:34.667743] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x145c010 name raid_bdev1, state offline 00:21:31.216 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.216 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:21:31.474 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:21:31.474 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:21:31.474 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:21:31.474 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' 
'/dev/nbd0 /dev/nbd1' 00:21:31.474 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:31.474 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:31.474 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:31.474 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:31.474 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:31.474 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:31.474 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:31.474 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:31.474 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:31.731 /dev/nbd0 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:31.731 10:36:35 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:31.731 1+0 records in 00:21:31.731 1+0 records out 00:21:31.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187403 s, 21.9 MB/s 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:31.731 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:31.989 /dev/nbd1 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:31.989 1+0 records in 00:21:31.989 1+0 records out 00:21:31.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268723 s, 15.2 MB/s 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:31.989 
10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:31.989 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:32.246 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:32.246 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:32.246 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:32.246 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:32.246 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:32.246 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:32.246 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:32.246 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:32.246 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:32.246 10:36:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:32.504 10:36:36 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2430714 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 2430714 ']' 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 2430714 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2430714 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2430714' 00:21:32.504 killing process with pid 2430714 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 2430714 00:21:32.504 Received shutdown signal, test time was about 60.000000 seconds 00:21:32.504 00:21:32.504 Latency(us) 00:21:32.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.504 
=================================================================================================================== 00:21:32.504 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:32.504 [2024-07-25 10:36:36.205028] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:32.504 10:36:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 2430714 00:21:32.761 [2024-07-25 10:36:36.241904] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:33.019 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:21:33.020 00:21:33.020 real 0m22.179s 00:21:33.020 user 0m30.155s 00:21:33.020 sys 0m4.312s 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.020 ************************************ 00:21:33.020 END TEST raid_rebuild_test 00:21:33.020 ************************************ 00:21:33.020 10:36:36 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:21:33.020 10:36:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:33.020 10:36:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:33.020 10:36:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:33.020 ************************************ 00:21:33.020 START TEST raid_rebuild_test_sb 00:21:33.020 ************************************ 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local 
superblock=true 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:21:33.020 10:36:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2433608 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2433608 /var/tmp/spdk-raid.sock 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2433608 ']' 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:33.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:33.020 10:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.020 [2024-07-25 10:36:36.638947] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:21:33.020 [2024-07-25 10:36:36.639032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433608 ] 00:21:33.020 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:33.020 Zero copy mechanism will not be used. 00:21:33.020 [2024-07-25 10:36:36.721316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.278 [2024-07-25 10:36:36.838577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.278 [2024-07-25 10:36:36.914097] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:33.278 [2024-07-25 10:36:36.914135] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:34.210 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:34.210 10:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:21:34.210 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:34.210 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:34.210 BaseBdev1_malloc 00:21:34.210 10:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:34.777 [2024-07-25 10:36:38.178731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:34.777 [2024-07-25 10:36:38.178783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.777 [2024-07-25 10:36:38.178808] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x2440430 00:21:34.777 [2024-07-25 10:36:38.178825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.777 [2024-07-25 10:36:38.180356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.777 [2024-07-25 10:36:38.180384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:34.777 BaseBdev1 00:21:34.777 10:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:34.777 10:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:34.777 BaseBdev2_malloc 00:21:34.777 10:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:35.034 [2024-07-25 10:36:38.699624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:35.034 [2024-07-25 10:36:38.699691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.035 [2024-07-25 10:36:38.699732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25e3a20 00:21:35.035 [2024-07-25 10:36:38.699751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.035 [2024-07-25 10:36:38.701490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.035 [2024-07-25 10:36:38.701519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:35.035 BaseBdev2 00:21:35.035 10:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 
00:21:35.294 spare_malloc 00:21:35.294 10:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:35.551 spare_delay 00:21:35.551 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:35.809 [2024-07-25 10:36:39.456680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:35.809 [2024-07-25 10:36:39.456756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.809 [2024-07-25 10:36:39.456787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2438070 00:21:35.809 [2024-07-25 10:36:39.456801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.809 [2024-07-25 10:36:39.458485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.809 [2024-07-25 10:36:39.458507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:35.809 spare 00:21:35.809 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:36.068 [2024-07-25 10:36:39.705381] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:36.068 [2024-07-25 10:36:39.706710] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:36.068 [2024-07-25 10:36:39.706927] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x25db010 00:21:36.068 [2024-07-25 10:36:39.706946] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 
512 00:21:36.068 [2024-07-25 10:36:39.707194] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2437d20 00:21:36.068 [2024-07-25 10:36:39.707383] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25db010 00:21:36.068 [2024-07-25 10:36:39.707400] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25db010 00:21:36.068 [2024-07-25 10:36:39.707546] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.068 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:36.068 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:36.068 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:36.068 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:36.068 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:36.068 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:36.068 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:36.068 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:36.068 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:36.068 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:36.068 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.068 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.326 10:36:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:36.326 "name": "raid_bdev1", 00:21:36.326 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:21:36.326 "strip_size_kb": 0, 00:21:36.326 "state": "online", 00:21:36.326 "raid_level": "raid1", 00:21:36.326 "superblock": true, 00:21:36.326 "num_base_bdevs": 2, 00:21:36.326 "num_base_bdevs_discovered": 2, 00:21:36.326 "num_base_bdevs_operational": 2, 00:21:36.326 "base_bdevs_list": [ 00:21:36.326 { 00:21:36.326 "name": "BaseBdev1", 00:21:36.326 "uuid": "6c605de6-7715-5264-85b1-93fab207c4bf", 00:21:36.326 "is_configured": true, 00:21:36.326 "data_offset": 2048, 00:21:36.326 "data_size": 63488 00:21:36.327 }, 00:21:36.327 { 00:21:36.327 "name": "BaseBdev2", 00:21:36.327 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:36.327 "is_configured": true, 00:21:36.327 "data_offset": 2048, 00:21:36.327 "data_size": 63488 00:21:36.327 } 00:21:36.327 ] 00:21:36.327 }' 00:21:36.327 10:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:36.327 10:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.891 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:36.892 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:21:37.149 [2024-07-25 10:36:40.796523] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:37.149 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:21:37.149 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.149 10:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:37.407 10:36:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:21:37.407 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:21:37.407 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:21:37.407 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:21:37.407 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:37.407 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:37.407 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:37.407 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:37.407 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:37.407 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:37.407 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:37.407 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:37.407 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:37.407 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:37.665 [2024-07-25 10:36:41.337813] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25da090 00:21:37.665 /dev/nbd0 00:21:37.665 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:37.665 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:37.665 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:21:37.665 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:21:37.665 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:37.665 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:37.665 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:37.665 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:21:37.665 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:37.665 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:37.665 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.922 1+0 records in 00:21:37.922 1+0 records out 00:21:37.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204351 s, 20.0 MB/s 00:21:37.922 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:37.922 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:21:37.922 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:37.922 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:37.922 10:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:21:37.922 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:37.922 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:37.922 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- 
# '[' raid1 = raid5f ']' 00:21:37.922 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:21:37.922 10:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:44.531 63488+0 records in 00:21:44.531 63488+0 records out 00:21:44.531 32505856 bytes (33 MB, 31 MiB) copied, 6.17409 s, 5.3 MB/s 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:44.531 10:36:47 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:44.531 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:44.531 [2024-07-25 10:36:47.899287] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.531 [2024-07-25 10:36:48.103014] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:44.531 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:44.531 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:44.531 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:44.531 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:44.531 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:44.531 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:44.531 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.531 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.531 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.531 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.531 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.531 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.790 10:36:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.790 "name": "raid_bdev1", 00:21:44.790 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:21:44.790 "strip_size_kb": 0, 00:21:44.790 "state": "online", 00:21:44.790 "raid_level": "raid1", 00:21:44.790 "superblock": true, 00:21:44.790 "num_base_bdevs": 2, 00:21:44.790 "num_base_bdevs_discovered": 1, 00:21:44.790 "num_base_bdevs_operational": 1, 00:21:44.790 "base_bdevs_list": [ 00:21:44.790 { 00:21:44.790 "name": null, 00:21:44.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.790 "is_configured": false, 00:21:44.790 "data_offset": 2048, 00:21:44.790 "data_size": 63488 00:21:44.790 }, 00:21:44.790 { 00:21:44.790 "name": "BaseBdev2", 00:21:44.790 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:44.790 "is_configured": true, 00:21:44.790 "data_offset": 2048, 00:21:44.790 "data_size": 63488 00:21:44.790 } 00:21:44.790 ] 00:21:44.790 }' 00:21:44.790 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.790 10:36:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.355 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:45.615 [2024-07-25 10:36:49.145807] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:45.615 [2024-07-25 10:36:49.152299] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25d9f70 00:21:45.615 [2024-07-25 10:36:49.154545] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:45.615 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:21:46.582 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:46.582 10:36:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:46.582 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:46.582 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:46.582 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:46.582 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.582 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.840 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:46.840 "name": "raid_bdev1", 00:21:46.840 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:21:46.840 "strip_size_kb": 0, 00:21:46.840 "state": "online", 00:21:46.840 "raid_level": "raid1", 00:21:46.840 "superblock": true, 00:21:46.841 "num_base_bdevs": 2, 00:21:46.841 "num_base_bdevs_discovered": 2, 00:21:46.841 "num_base_bdevs_operational": 2, 00:21:46.841 "process": { 00:21:46.841 "type": "rebuild", 00:21:46.841 "target": "spare", 00:21:46.841 "progress": { 00:21:46.841 "blocks": 24576, 00:21:46.841 "percent": 38 00:21:46.841 } 00:21:46.841 }, 00:21:46.841 "base_bdevs_list": [ 00:21:46.841 { 00:21:46.841 "name": "spare", 00:21:46.841 "uuid": "b67f68ad-72c2-50dd-bfba-4b79f990a587", 00:21:46.841 "is_configured": true, 00:21:46.841 "data_offset": 2048, 00:21:46.841 "data_size": 63488 00:21:46.841 }, 00:21:46.841 { 00:21:46.841 "name": "BaseBdev2", 00:21:46.841 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:46.841 "is_configured": true, 00:21:46.841 "data_offset": 2048, 00:21:46.841 "data_size": 63488 00:21:46.841 } 00:21:46.841 ] 00:21:46.841 }' 00:21:46.841 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:21:46.841 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:46.841 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:46.841 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:46.841 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:47.099 [2024-07-25 10:36:50.728672] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:47.099 [2024-07-25 10:36:50.767882] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:47.099 [2024-07-25 10:36:50.767938] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.099 [2024-07-25 10:36:50.767960] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:47.099 [2024-07-25 10:36:50.767970] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:47.099 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:47.099 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:47.099 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:47.099 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:47.099 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:47.099 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:47.099 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.099 10:36:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.099 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.099 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.099 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.099 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.357 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.357 "name": "raid_bdev1", 00:21:47.357 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:21:47.357 "strip_size_kb": 0, 00:21:47.357 "state": "online", 00:21:47.357 "raid_level": "raid1", 00:21:47.357 "superblock": true, 00:21:47.357 "num_base_bdevs": 2, 00:21:47.357 "num_base_bdevs_discovered": 1, 00:21:47.357 "num_base_bdevs_operational": 1, 00:21:47.357 "base_bdevs_list": [ 00:21:47.357 { 00:21:47.357 "name": null, 00:21:47.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.357 "is_configured": false, 00:21:47.357 "data_offset": 2048, 00:21:47.357 "data_size": 63488 00:21:47.357 }, 00:21:47.357 { 00:21:47.357 "name": "BaseBdev2", 00:21:47.357 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:47.357 "is_configured": true, 00:21:47.357 "data_offset": 2048, 00:21:47.357 "data_size": 63488 00:21:47.357 } 00:21:47.357 ] 00:21:47.357 }' 00:21:47.357 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.357 10:36:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.290 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:48.291 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:21:48.291 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:48.291 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:48.291 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:48.291 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.291 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.291 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:48.291 "name": "raid_bdev1", 00:21:48.291 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:21:48.291 "strip_size_kb": 0, 00:21:48.291 "state": "online", 00:21:48.291 "raid_level": "raid1", 00:21:48.291 "superblock": true, 00:21:48.291 "num_base_bdevs": 2, 00:21:48.291 "num_base_bdevs_discovered": 1, 00:21:48.291 "num_base_bdevs_operational": 1, 00:21:48.291 "base_bdevs_list": [ 00:21:48.291 { 00:21:48.291 "name": null, 00:21:48.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.291 "is_configured": false, 00:21:48.291 "data_offset": 2048, 00:21:48.291 "data_size": 63488 00:21:48.291 }, 00:21:48.291 { 00:21:48.291 "name": "BaseBdev2", 00:21:48.291 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:48.291 "is_configured": true, 00:21:48.291 "data_offset": 2048, 00:21:48.291 "data_size": 63488 00:21:48.291 } 00:21:48.291 ] 00:21:48.291 }' 00:21:48.291 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:48.291 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:48.291 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:48.291 10:36:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:48.291 10:36:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:48.549 [2024-07-25 10:36:52.197730] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:48.549 [2024-07-25 10:36:52.204773] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25d9f70 00:21:48.549 [2024-07-25 10:36:52.206359] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:48.549 10:36:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:49.922 "name": "raid_bdev1", 00:21:49.922 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:21:49.922 "strip_size_kb": 0, 00:21:49.922 "state": "online", 00:21:49.922 "raid_level": "raid1", 00:21:49.922 "superblock": true, 
00:21:49.922 "num_base_bdevs": 2, 00:21:49.922 "num_base_bdevs_discovered": 2, 00:21:49.922 "num_base_bdevs_operational": 2, 00:21:49.922 "process": { 00:21:49.922 "type": "rebuild", 00:21:49.922 "target": "spare", 00:21:49.922 "progress": { 00:21:49.922 "blocks": 24576, 00:21:49.922 "percent": 38 00:21:49.922 } 00:21:49.922 }, 00:21:49.922 "base_bdevs_list": [ 00:21:49.922 { 00:21:49.922 "name": "spare", 00:21:49.922 "uuid": "b67f68ad-72c2-50dd-bfba-4b79f990a587", 00:21:49.922 "is_configured": true, 00:21:49.922 "data_offset": 2048, 00:21:49.922 "data_size": 63488 00:21:49.922 }, 00:21:49.922 { 00:21:49.922 "name": "BaseBdev2", 00:21:49.922 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:49.922 "is_configured": true, 00:21:49.922 "data_offset": 2048, 00:21:49.922 "data_size": 63488 00:21:49.922 } 00:21:49.922 ] 00:21:49.922 }' 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:21:49.922 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:21:49.922 10:36:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=777 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.922 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.181 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:50.181 "name": "raid_bdev1", 00:21:50.181 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:21:50.181 "strip_size_kb": 0, 00:21:50.181 "state": "online", 00:21:50.181 "raid_level": "raid1", 00:21:50.181 "superblock": true, 00:21:50.181 "num_base_bdevs": 2, 00:21:50.181 "num_base_bdevs_discovered": 2, 00:21:50.181 "num_base_bdevs_operational": 2, 00:21:50.181 "process": { 00:21:50.181 "type": "rebuild", 00:21:50.181 "target": "spare", 00:21:50.181 "progress": { 00:21:50.181 "blocks": 32768, 00:21:50.181 "percent": 51 00:21:50.181 } 00:21:50.181 }, 00:21:50.181 "base_bdevs_list": [ 00:21:50.181 { 00:21:50.181 "name": "spare", 00:21:50.181 "uuid": "b67f68ad-72c2-50dd-bfba-4b79f990a587", 00:21:50.181 "is_configured": true, 00:21:50.181 "data_offset": 2048, 00:21:50.181 "data_size": 63488 00:21:50.181 }, 
00:21:50.181 { 00:21:50.181 "name": "BaseBdev2", 00:21:50.181 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:50.181 "is_configured": true, 00:21:50.181 "data_offset": 2048, 00:21:50.181 "data_size": 63488 00:21:50.181 } 00:21:50.181 ] 00:21:50.181 }' 00:21:50.181 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:50.181 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:50.181 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:50.439 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:50.439 10:36:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:51.373 10:36:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:51.373 10:36:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:51.373 10:36:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:51.373 10:36:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:51.373 10:36:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:51.373 10:36:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:51.373 10:36:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.373 10:36:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.631 10:36:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:51.631 "name": "raid_bdev1", 00:21:51.631 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 
00:21:51.631 "strip_size_kb": 0, 00:21:51.631 "state": "online", 00:21:51.631 "raid_level": "raid1", 00:21:51.631 "superblock": true, 00:21:51.631 "num_base_bdevs": 2, 00:21:51.631 "num_base_bdevs_discovered": 2, 00:21:51.631 "num_base_bdevs_operational": 2, 00:21:51.631 "process": { 00:21:51.631 "type": "rebuild", 00:21:51.631 "target": "spare", 00:21:51.631 "progress": { 00:21:51.631 "blocks": 59392, 00:21:51.631 "percent": 93 00:21:51.631 } 00:21:51.631 }, 00:21:51.631 "base_bdevs_list": [ 00:21:51.631 { 00:21:51.631 "name": "spare", 00:21:51.631 "uuid": "b67f68ad-72c2-50dd-bfba-4b79f990a587", 00:21:51.631 "is_configured": true, 00:21:51.631 "data_offset": 2048, 00:21:51.631 "data_size": 63488 00:21:51.631 }, 00:21:51.631 { 00:21:51.631 "name": "BaseBdev2", 00:21:51.631 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:51.631 "is_configured": true, 00:21:51.631 "data_offset": 2048, 00:21:51.631 "data_size": 63488 00:21:51.631 } 00:21:51.631 ] 00:21:51.631 }' 00:21:51.631 10:36:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:51.631 10:36:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:51.631 10:36:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:51.631 10:36:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.631 10:36:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:51.631 [2024-07-25 10:36:55.332245] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:51.631 [2024-07-25 10:36:55.332306] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:51.631 [2024-07-25 10:36:55.332408] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.564 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( 
SECONDS < timeout )) 00:21:52.564 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.564 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:52.564 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:52.564 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:52.564 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:52.564 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.564 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.131 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:53.131 "name": "raid_bdev1", 00:21:53.131 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:21:53.131 "strip_size_kb": 0, 00:21:53.131 "state": "online", 00:21:53.131 "raid_level": "raid1", 00:21:53.131 "superblock": true, 00:21:53.131 "num_base_bdevs": 2, 00:21:53.131 "num_base_bdevs_discovered": 2, 00:21:53.131 "num_base_bdevs_operational": 2, 00:21:53.131 "base_bdevs_list": [ 00:21:53.131 { 00:21:53.131 "name": "spare", 00:21:53.131 "uuid": "b67f68ad-72c2-50dd-bfba-4b79f990a587", 00:21:53.131 "is_configured": true, 00:21:53.131 "data_offset": 2048, 00:21:53.131 "data_size": 63488 00:21:53.131 }, 00:21:53.131 { 00:21:53.131 "name": "BaseBdev2", 00:21:53.131 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:53.131 "is_configured": true, 00:21:53.131 "data_offset": 2048, 00:21:53.131 "data_size": 63488 00:21:53.131 } 00:21:53.131 ] 00:21:53.131 }' 00:21:53.131 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:21:53.131 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:53.131 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:53.131 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:53.131 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:21:53.131 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:53.131 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:53.131 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:53.131 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:53.131 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:53.131 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.131 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:53.389 "name": "raid_bdev1", 00:21:53.389 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:21:53.389 "strip_size_kb": 0, 00:21:53.389 "state": "online", 00:21:53.389 "raid_level": "raid1", 00:21:53.389 "superblock": true, 00:21:53.389 "num_base_bdevs": 2, 00:21:53.389 "num_base_bdevs_discovered": 2, 00:21:53.389 "num_base_bdevs_operational": 2, 00:21:53.389 "base_bdevs_list": [ 00:21:53.389 { 00:21:53.389 "name": "spare", 00:21:53.389 "uuid": "b67f68ad-72c2-50dd-bfba-4b79f990a587", 00:21:53.389 "is_configured": true, 00:21:53.389 "data_offset": 2048, 00:21:53.389 "data_size": 
63488 00:21:53.389 }, 00:21:53.389 { 00:21:53.389 "name": "BaseBdev2", 00:21:53.389 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:53.389 "is_configured": true, 00:21:53.389 "data_offset": 2048, 00:21:53.389 "data_size": 63488 00:21:53.389 } 00:21:53.389 ] 00:21:53.389 }' 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:53.389 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.647 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.647 "name": "raid_bdev1", 00:21:53.647 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:21:53.647 "strip_size_kb": 0, 00:21:53.647 "state": "online", 00:21:53.647 "raid_level": "raid1", 00:21:53.647 "superblock": true, 00:21:53.647 "num_base_bdevs": 2, 00:21:53.647 "num_base_bdevs_discovered": 2, 00:21:53.647 "num_base_bdevs_operational": 2, 00:21:53.647 "base_bdevs_list": [ 00:21:53.647 { 00:21:53.647 "name": "spare", 00:21:53.647 "uuid": "b67f68ad-72c2-50dd-bfba-4b79f990a587", 00:21:53.647 "is_configured": true, 00:21:53.647 "data_offset": 2048, 00:21:53.647 "data_size": 63488 00:21:53.647 }, 00:21:53.647 { 00:21:53.647 "name": "BaseBdev2", 00:21:53.647 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:53.648 "is_configured": true, 00:21:53.648 "data_offset": 2048, 00:21:53.648 "data_size": 63488 00:21:53.648 } 00:21:53.648 ] 00:21:53.648 }' 00:21:53.648 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.648 10:36:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:54.213 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:54.471 [2024-07-25 10:36:58.105733] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:54.471 [2024-07-25 10:36:58.105758] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:54.471 [2024-07-25 10:36:58.105831] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:54.471 [2024-07-25 10:36:58.105894] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:21:54.471 [2024-07-25 10:36:58.105907] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25db010 name raid_bdev1, state offline 00:21:54.471 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.471 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:21:54.730 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:21:54.730 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:21:54.730 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:21:54.730 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:54.730 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:54.730 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:54.730 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:54.730 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:54.730 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:54.730 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:54.730 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:54.730 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:54.730 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:54.988 
/dev/nbd0 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:54.988 1+0 records in 00:21:54.988 1+0 records out 00:21:54.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202718 s, 20.2 MB/s 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@889 -- # return 0 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:54.988 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:55.246 /dev/nbd1 00:21:55.246 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:55.246 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:55.246 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:55.246 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:21:55.246 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:55.246 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:55.246 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:55.246 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:21:55.246 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:55.246 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:55.246 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:55.246 1+0 records in 00:21:55.246 1+0 records out 00:21:55.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021439 s, 19.1 MB/s 00:21:55.246 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:55.246 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:21:55.247 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:55.247 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:55.247 10:36:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:21:55.247 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:55.247 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:55.247 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:55.505 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:55.505 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:55.505 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:55.505 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:55.505 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:55.505 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:55.505 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:55.763 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:55.763 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:55.763 10:36:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:55.763 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:55.763 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:55.763 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:55.763 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:55.763 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:55.763 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:55.763 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:56.021 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:56.022 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:56.022 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:56.022 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:56.022 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:56.022 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:56.022 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:56.022 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:56.022 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:21:56.022 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:56.279 10:36:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:56.536 [2024-07-25 10:37:00.071684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:56.536 [2024-07-25 10:37:00.071762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.536 [2024-07-25 10:37:00.071787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x243f590 00:21:56.536 [2024-07-25 10:37:00.071801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.536 [2024-07-25 10:37:00.073477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.536 [2024-07-25 10:37:00.073501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:56.536 [2024-07-25 10:37:00.073617] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:56.536 [2024-07-25 10:37:00.073649] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:56.536 [2024-07-25 10:37:00.073764] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:56.536 spare 00:21:56.536 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:56.536 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:56.536 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:56.536 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:56.536 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:56.536 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:56.536 10:37:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.536 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.536 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.536 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.536 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.536 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.536 [2024-07-25 10:37:00.174088] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x24397b0 00:21:56.536 [2024-07-25 10:37:00.174113] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:56.536 [2024-07-25 10:37:00.174293] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25d9f70 00:21:56.536 [2024-07-25 10:37:00.174458] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24397b0 00:21:56.536 [2024-07-25 10:37:00.174475] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24397b0 00:21:56.536 [2024-07-25 10:37:00.174585] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.793 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.793 "name": "raid_bdev1", 00:21:56.793 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:21:56.793 "strip_size_kb": 0, 00:21:56.793 "state": "online", 00:21:56.793 "raid_level": "raid1", 00:21:56.793 "superblock": true, 00:21:56.793 "num_base_bdevs": 2, 00:21:56.793 "num_base_bdevs_discovered": 2, 00:21:56.793 "num_base_bdevs_operational": 2, 00:21:56.793 "base_bdevs_list": [ 00:21:56.793 { 00:21:56.793 "name": 
"spare", 00:21:56.793 "uuid": "b67f68ad-72c2-50dd-bfba-4b79f990a587", 00:21:56.793 "is_configured": true, 00:21:56.793 "data_offset": 2048, 00:21:56.793 "data_size": 63488 00:21:56.793 }, 00:21:56.793 { 00:21:56.793 "name": "BaseBdev2", 00:21:56.793 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:56.793 "is_configured": true, 00:21:56.793 "data_offset": 2048, 00:21:56.793 "data_size": 63488 00:21:56.793 } 00:21:56.793 ] 00:21:56.793 }' 00:21:56.793 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.793 10:37:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:57.356 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:57.356 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:57.356 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:57.356 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:57.356 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:57.356 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.356 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.612 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:57.612 "name": "raid_bdev1", 00:21:57.612 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:21:57.613 "strip_size_kb": 0, 00:21:57.613 "state": "online", 00:21:57.613 "raid_level": "raid1", 00:21:57.613 "superblock": true, 00:21:57.613 "num_base_bdevs": 2, 00:21:57.613 "num_base_bdevs_discovered": 2, 00:21:57.613 "num_base_bdevs_operational": 2, 00:21:57.613 
"base_bdevs_list": [ 00:21:57.613 { 00:21:57.613 "name": "spare", 00:21:57.613 "uuid": "b67f68ad-72c2-50dd-bfba-4b79f990a587", 00:21:57.613 "is_configured": true, 00:21:57.613 "data_offset": 2048, 00:21:57.613 "data_size": 63488 00:21:57.613 }, 00:21:57.613 { 00:21:57.613 "name": "BaseBdev2", 00:21:57.613 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:57.613 "is_configured": true, 00:21:57.613 "data_offset": 2048, 00:21:57.613 "data_size": 63488 00:21:57.613 } 00:21:57.613 ] 00:21:57.613 }' 00:21:57.613 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:57.613 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:57.613 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:57.613 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:57.613 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.613 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:57.870 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.870 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:58.127 [2024-07-25 10:37:01.736201] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:58.127 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:58.127 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:58.127 10:37:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:58.127 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:58.127 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:58.127 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:58.127 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:58.127 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:58.127 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:58.127 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:58.127 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.127 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.385 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:58.385 "name": "raid_bdev1", 00:21:58.385 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:21:58.385 "strip_size_kb": 0, 00:21:58.385 "state": "online", 00:21:58.385 "raid_level": "raid1", 00:21:58.385 "superblock": true, 00:21:58.385 "num_base_bdevs": 2, 00:21:58.385 "num_base_bdevs_discovered": 1, 00:21:58.385 "num_base_bdevs_operational": 1, 00:21:58.385 "base_bdevs_list": [ 00:21:58.385 { 00:21:58.385 "name": null, 00:21:58.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.385 "is_configured": false, 00:21:58.385 "data_offset": 2048, 00:21:58.385 "data_size": 63488 00:21:58.385 }, 00:21:58.385 { 00:21:58.385 "name": "BaseBdev2", 00:21:58.385 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:21:58.385 "is_configured": true, 00:21:58.385 
"data_offset": 2048, 00:21:58.385 "data_size": 63488 00:21:58.385 } 00:21:58.385 ] 00:21:58.385 }' 00:21:58.385 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:58.385 10:37:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:58.949 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:59.206 [2024-07-25 10:37:02.774946] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:59.206 [2024-07-25 10:37:02.775150] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:59.206 [2024-07-25 10:37:02.775172] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:59.206 [2024-07-25 10:37:02.775202] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:59.206 [2024-07-25 10:37:02.781866] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25d9f70 00:21:59.206 [2024-07-25 10:37:02.784078] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:59.206 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:22:00.139 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:00.139 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:00.139 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:00.139 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:00.139 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:00.139 
10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.139 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.396 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:00.396 "name": "raid_bdev1", 00:22:00.396 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:22:00.396 "strip_size_kb": 0, 00:22:00.396 "state": "online", 00:22:00.396 "raid_level": "raid1", 00:22:00.396 "superblock": true, 00:22:00.396 "num_base_bdevs": 2, 00:22:00.396 "num_base_bdevs_discovered": 2, 00:22:00.396 "num_base_bdevs_operational": 2, 00:22:00.396 "process": { 00:22:00.396 "type": "rebuild", 00:22:00.396 "target": "spare", 00:22:00.396 "progress": { 00:22:00.396 "blocks": 24576, 00:22:00.396 "percent": 38 00:22:00.396 } 00:22:00.396 }, 00:22:00.396 "base_bdevs_list": [ 00:22:00.396 { 00:22:00.396 "name": "spare", 00:22:00.396 "uuid": "b67f68ad-72c2-50dd-bfba-4b79f990a587", 00:22:00.396 "is_configured": true, 00:22:00.396 "data_offset": 2048, 00:22:00.396 "data_size": 63488 00:22:00.396 }, 00:22:00.396 { 00:22:00.396 "name": "BaseBdev2", 00:22:00.396 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:22:00.396 "is_configured": true, 00:22:00.396 "data_offset": 2048, 00:22:00.396 "data_size": 63488 00:22:00.396 } 00:22:00.396 ] 00:22:00.396 }' 00:22:00.396 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:00.396 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:00.396 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:00.653 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.653 10:37:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:00.911 [2024-07-25 10:37:04.366413] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:00.911 [2024-07-25 10:37:04.397276] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:00.911 [2024-07-25 10:37:04.397329] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.911 [2024-07-25 10:37:04.397350] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:00.911 [2024-07-25 10:37:04.397360] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:00.911 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:00.911 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:00.911 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:00.911 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:00.911 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:00.911 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:00.911 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.911 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.911 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.911 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.911 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.911 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.169 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:01.169 "name": "raid_bdev1", 00:22:01.169 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:22:01.169 "strip_size_kb": 0, 00:22:01.169 "state": "online", 00:22:01.169 "raid_level": "raid1", 00:22:01.169 "superblock": true, 00:22:01.169 "num_base_bdevs": 2, 00:22:01.169 "num_base_bdevs_discovered": 1, 00:22:01.169 "num_base_bdevs_operational": 1, 00:22:01.169 "base_bdevs_list": [ 00:22:01.169 { 00:22:01.169 "name": null, 00:22:01.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.169 "is_configured": false, 00:22:01.169 "data_offset": 2048, 00:22:01.169 "data_size": 63488 00:22:01.169 }, 00:22:01.169 { 00:22:01.169 "name": "BaseBdev2", 00:22:01.169 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:22:01.169 "is_configured": true, 00:22:01.169 "data_offset": 2048, 00:22:01.169 "data_size": 63488 00:22:01.169 } 00:22:01.169 ] 00:22:01.169 }' 00:22:01.169 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:01.169 10:37:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.734 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:01.734 [2024-07-25 10:37:05.413478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:01.734 [2024-07-25 10:37:05.413541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.734 [2024-07-25 10:37:05.413568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d9f10 
00:22:01.734 [2024-07-25 10:37:05.413584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.734 [2024-07-25 10:37:05.414035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.734 [2024-07-25 10:37:05.414062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:01.734 [2024-07-25 10:37:05.414184] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:01.734 [2024-07-25 10:37:05.414203] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:01.734 [2024-07-25 10:37:05.414214] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:01.734 [2024-07-25 10:37:05.414238] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:01.734 [2024-07-25 10:37:05.420830] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2439e30 00:22:01.734 spare 00:22:01.734 [2024-07-25 10:37:05.422378] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:01.734 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:22:03.107 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:03.107 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:03.107 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:03.107 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:03.107 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:03.107 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.107 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.107 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:03.107 "name": "raid_bdev1", 00:22:03.107 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:22:03.107 "strip_size_kb": 0, 00:22:03.107 "state": "online", 00:22:03.107 "raid_level": "raid1", 00:22:03.107 "superblock": true, 00:22:03.107 "num_base_bdevs": 2, 00:22:03.107 "num_base_bdevs_discovered": 2, 00:22:03.107 "num_base_bdevs_operational": 2, 00:22:03.107 "process": { 00:22:03.107 "type": "rebuild", 00:22:03.107 "target": "spare", 00:22:03.107 "progress": { 00:22:03.107 "blocks": 24576, 00:22:03.107 "percent": 38 00:22:03.107 } 00:22:03.107 }, 00:22:03.107 "base_bdevs_list": [ 00:22:03.107 { 00:22:03.107 "name": "spare", 00:22:03.107 "uuid": "b67f68ad-72c2-50dd-bfba-4b79f990a587", 00:22:03.107 "is_configured": true, 00:22:03.107 "data_offset": 2048, 00:22:03.107 "data_size": 63488 00:22:03.107 }, 00:22:03.107 { 00:22:03.107 "name": "BaseBdev2", 00:22:03.107 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:22:03.107 "is_configured": true, 00:22:03.107 "data_offset": 2048, 00:22:03.107 "data_size": 63488 00:22:03.107 } 00:22:03.107 ] 00:22:03.107 }' 00:22:03.107 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:03.107 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:03.107 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:03.107 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:03.107 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:03.365 [2024-07-25 10:37:07.009289] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:03.365 [2024-07-25 10:37:07.035685] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:03.365 [2024-07-25 10:37:07.035746] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.365 [2024-07-25 10:37:07.035764] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:03.365 [2024-07-25 10:37:07.035773] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:03.365 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:03.365 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:03.365 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:03.366 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:03.366 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:03.366 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:03.366 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.366 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.366 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.366 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.366 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.366 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.624 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.624 "name": "raid_bdev1", 00:22:03.624 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:22:03.624 "strip_size_kb": 0, 00:22:03.624 "state": "online", 00:22:03.624 "raid_level": "raid1", 00:22:03.624 "superblock": true, 00:22:03.624 "num_base_bdevs": 2, 00:22:03.624 "num_base_bdevs_discovered": 1, 00:22:03.624 "num_base_bdevs_operational": 1, 00:22:03.624 "base_bdevs_list": [ 00:22:03.624 { 00:22:03.624 "name": null, 00:22:03.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.624 "is_configured": false, 00:22:03.624 "data_offset": 2048, 00:22:03.624 "data_size": 63488 00:22:03.624 }, 00:22:03.624 { 00:22:03.624 "name": "BaseBdev2", 00:22:03.624 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:22:03.624 "is_configured": true, 00:22:03.624 "data_offset": 2048, 00:22:03.624 "data_size": 63488 00:22:03.624 } 00:22:03.624 ] 00:22:03.624 }' 00:22:03.624 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.624 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.556 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:04.556 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:04.556 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:04.556 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:04.556 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:04.556 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.556 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.556 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:04.556 "name": "raid_bdev1", 00:22:04.556 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:22:04.556 "strip_size_kb": 0, 00:22:04.556 "state": "online", 00:22:04.556 "raid_level": "raid1", 00:22:04.556 "superblock": true, 00:22:04.556 "num_base_bdevs": 2, 00:22:04.556 "num_base_bdevs_discovered": 1, 00:22:04.556 "num_base_bdevs_operational": 1, 00:22:04.556 "base_bdevs_list": [ 00:22:04.556 { 00:22:04.556 "name": null, 00:22:04.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.556 "is_configured": false, 00:22:04.556 "data_offset": 2048, 00:22:04.556 "data_size": 63488 00:22:04.556 }, 00:22:04.556 { 00:22:04.556 "name": "BaseBdev2", 00:22:04.557 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:22:04.557 "is_configured": true, 00:22:04.557 "data_offset": 2048, 00:22:04.557 "data_size": 63488 00:22:04.557 } 00:22:04.557 ] 00:22:04.557 }' 00:22:04.557 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:04.557 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:04.557 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:04.557 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:04.557 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:04.816 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:05.106 [2024-07-25 10:37:08.709918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:05.106 [2024-07-25 10:37:08.709992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:05.106 [2024-07-25 10:37:08.710018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2440660 00:22:05.106 [2024-07-25 10:37:08.710031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:05.106 [2024-07-25 10:37:08.710470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:05.106 [2024-07-25 10:37:08.710496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:05.106 [2024-07-25 10:37:08.710595] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:05.106 [2024-07-25 10:37:08.710614] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:05.106 [2024-07-25 10:37:08.710624] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:05.106 BaseBdev1 00:22:05.106 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:22:06.039 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:06.039 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:06.039 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:06.039 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:06.039 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:22:06.039 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:06.039 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:06.039 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:06.039 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:06.039 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:06.039 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.039 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.298 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:06.298 "name": "raid_bdev1", 00:22:06.298 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:22:06.298 "strip_size_kb": 0, 00:22:06.298 "state": "online", 00:22:06.298 "raid_level": "raid1", 00:22:06.298 "superblock": true, 00:22:06.298 "num_base_bdevs": 2, 00:22:06.298 "num_base_bdevs_discovered": 1, 00:22:06.298 "num_base_bdevs_operational": 1, 00:22:06.298 "base_bdevs_list": [ 00:22:06.298 { 00:22:06.298 "name": null, 00:22:06.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.298 "is_configured": false, 00:22:06.298 "data_offset": 2048, 00:22:06.298 "data_size": 63488 00:22:06.298 }, 00:22:06.298 { 00:22:06.298 "name": "BaseBdev2", 00:22:06.298 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:22:06.298 "is_configured": true, 00:22:06.298 "data_offset": 2048, 00:22:06.298 "data_size": 63488 00:22:06.298 } 00:22:06.298 ] 00:22:06.298 }' 00:22:06.298 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:06.298 10:37:09 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:22:06.863 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:06.863 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:06.863 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:06.863 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:06.863 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:06.863 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.863 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.121 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:07.121 "name": "raid_bdev1", 00:22:07.121 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:22:07.121 "strip_size_kb": 0, 00:22:07.121 "state": "online", 00:22:07.121 "raid_level": "raid1", 00:22:07.121 "superblock": true, 00:22:07.121 "num_base_bdevs": 2, 00:22:07.121 "num_base_bdevs_discovered": 1, 00:22:07.121 "num_base_bdevs_operational": 1, 00:22:07.121 "base_bdevs_list": [ 00:22:07.121 { 00:22:07.121 "name": null, 00:22:07.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.121 "is_configured": false, 00:22:07.121 "data_offset": 2048, 00:22:07.121 "data_size": 63488 00:22:07.121 }, 00:22:07.122 { 00:22:07.122 "name": "BaseBdev2", 00:22:07.122 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:22:07.122 "is_configured": true, 00:22:07.122 "data_offset": 2048, 00:22:07.122 "data_size": 63488 00:22:07.122 } 00:22:07.122 ] 00:22:07.122 }' 00:22:07.122 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type 
// "none"' 00:22:07.122 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:07.122 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:07.380 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:07.380 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:07.380 10:37:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:22:07.380 10:37:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:07.380 10:37:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:07.380 10:37:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.380 10:37:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:07.380 10:37:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.380 10:37:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:07.380 10:37:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.380 10:37:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:07.380 10:37:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # 
[[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:07.380 10:37:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:07.380 [2024-07-25 10:37:11.076251] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:07.380 [2024-07-25 10:37:11.076439] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:07.380 [2024-07-25 10:37:11.076459] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:07.380 request: 00:22:07.380 { 00:22:07.380 "base_bdev": "BaseBdev1", 00:22:07.380 "raid_bdev": "raid_bdev1", 00:22:07.380 "method": "bdev_raid_add_base_bdev", 00:22:07.380 "req_id": 1 00:22:07.380 } 00:22:07.380 Got JSON-RPC error response 00:22:07.380 response: 00:22:07.380 { 00:22:07.380 "code": -22, 00:22:07.380 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:07.380 } 00:22:07.639 10:37:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:22:07.639 10:37:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:07.639 10:37:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:07.639 10:37:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:07.639 10:37:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:22:08.573 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:08.573 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:08.573 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:22:08.573 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:08.573 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:08.573 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:08.573 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.573 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.573 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.573 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.573 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.573 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.832 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.832 "name": "raid_bdev1", 00:22:08.832 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:22:08.832 "strip_size_kb": 0, 00:22:08.832 "state": "online", 00:22:08.832 "raid_level": "raid1", 00:22:08.832 "superblock": true, 00:22:08.832 "num_base_bdevs": 2, 00:22:08.832 "num_base_bdevs_discovered": 1, 00:22:08.832 "num_base_bdevs_operational": 1, 00:22:08.832 "base_bdevs_list": [ 00:22:08.832 { 00:22:08.832 "name": null, 00:22:08.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.832 "is_configured": false, 00:22:08.832 "data_offset": 2048, 00:22:08.832 "data_size": 63488 00:22:08.832 }, 00:22:08.832 { 00:22:08.832 "name": "BaseBdev2", 00:22:08.832 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:22:08.832 "is_configured": true, 00:22:08.832 "data_offset": 2048, 00:22:08.832 
"data_size": 63488 00:22:08.832 } 00:22:08.832 ] 00:22:08.832 }' 00:22:08.832 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.832 10:37:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.397 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:09.397 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:09.397 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:09.397 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:09.397 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:09.397 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.397 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:09.656 "name": "raid_bdev1", 00:22:09.656 "uuid": "98c349a9-a0a1-4d4b-bcf2-a8bcfcbfe911", 00:22:09.656 "strip_size_kb": 0, 00:22:09.656 "state": "online", 00:22:09.656 "raid_level": "raid1", 00:22:09.656 "superblock": true, 00:22:09.656 "num_base_bdevs": 2, 00:22:09.656 "num_base_bdevs_discovered": 1, 00:22:09.656 "num_base_bdevs_operational": 1, 00:22:09.656 "base_bdevs_list": [ 00:22:09.656 { 00:22:09.656 "name": null, 00:22:09.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.656 "is_configured": false, 00:22:09.656 "data_offset": 2048, 00:22:09.656 "data_size": 63488 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "name": "BaseBdev2", 00:22:09.656 "uuid": "b07d28e5-842a-5e06-b72f-c1b02f660852", 00:22:09.656 "is_configured": true, 
00:22:09.656 "data_offset": 2048, 00:22:09.656 "data_size": 63488 00:22:09.656 } 00:22:09.656 ] 00:22:09.656 }' 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2433608 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2433608 ']' 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 2433608 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2433608 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2433608' 00:22:09.656 killing process with pid 2433608 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 2433608 00:22:09.656 Received shutdown signal, test time was about 60.000000 seconds 00:22:09.656 00:22:09.656 Latency(us) 00:22:09.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.656 
=================================================================================================================== 00:22:09.656 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:09.656 [2024-07-25 10:37:13.284016] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:09.656 10:37:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 2433608 00:22:09.656 [2024-07-25 10:37:13.284132] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:09.656 [2024-07-25 10:37:13.284193] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:09.656 [2024-07-25 10:37:13.284208] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24397b0 name raid_bdev1, state offline 00:22:09.656 [2024-07-25 10:37:13.321238] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:09.913 10:37:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:22:09.913 00:22:09.913 real 0m37.019s 00:22:09.913 user 0m53.833s 00:22:09.913 sys 0m5.795s 00:22:09.913 10:37:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:09.913 10:37:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.913 ************************************ 00:22:09.913 END TEST raid_rebuild_test_sb 00:22:09.913 ************************************ 00:22:09.913 10:37:13 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:22:10.171 10:37:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:10.171 10:37:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:10.171 10:37:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:10.171 ************************************ 00:22:10.171 START TEST raid_rebuild_test_io 00:22:10.171 ************************************ 00:22:10.172 
10:37:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:10.172 
10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2438442 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2438442 /var/tmp/spdk-raid.sock 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 2438442 ']' 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:10.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.172 10:37:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:10.172 [2024-07-25 10:37:13.706367] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:22:10.172 [2024-07-25 10:37:13.706441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2438442 ] 00:22:10.172 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:10.172 Zero copy mechanism will not be used. 00:22:10.172 [2024-07-25 10:37:13.787573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.430 [2024-07-25 10:37:13.910676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.430 [2024-07-25 10:37:13.987400] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:10.430 [2024-07-25 10:37:13.987440] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:10.996 10:37:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:10.996 10:37:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:22:10.996 10:37:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:10.996 10:37:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:11.254 BaseBdev1_malloc 00:22:11.254 10:37:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:11.511 [2024-07-25 10:37:15.127013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:11.511 [2024-07-25 10:37:15.127072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.511 [2024-07-25 10:37:15.127098] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x2057430 00:22:11.511 [2024-07-25 10:37:15.127123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.511 [2024-07-25 10:37:15.128656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.511 [2024-07-25 10:37:15.128684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:11.511 BaseBdev1 00:22:11.511 10:37:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:11.511 10:37:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:11.770 BaseBdev2_malloc 00:22:11.770 10:37:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:12.028 [2024-07-25 10:37:15.659494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:12.028 [2024-07-25 10:37:15.659554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.028 [2024-07-25 10:37:15.659585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21faa20 00:22:12.028 [2024-07-25 10:37:15.659601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.028 [2024-07-25 10:37:15.661251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.028 [2024-07-25 10:37:15.661280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:12.028 BaseBdev2 00:22:12.028 10:37:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 
00:22:12.286 spare_malloc 00:22:12.287 10:37:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:12.544 spare_delay 00:22:12.544 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:12.802 [2024-07-25 10:37:16.395958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:12.802 [2024-07-25 10:37:16.396014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.802 [2024-07-25 10:37:16.396041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x204f070 00:22:12.802 [2024-07-25 10:37:16.396056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.802 [2024-07-25 10:37:16.397500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.802 [2024-07-25 10:37:16.397528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:12.802 spare 00:22:12.802 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:13.060 [2024-07-25 10:37:16.632604] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:13.060 [2024-07-25 10:37:16.633804] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:13.060 [2024-07-25 10:37:16.633888] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x21f2010 00:22:13.060 [2024-07-25 10:37:16.633905] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 
00:22:13.060 [2024-07-25 10:37:16.634087] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20524c0 00:22:13.060 [2024-07-25 10:37:16.634277] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21f2010 00:22:13.060 [2024-07-25 10:37:16.634293] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x21f2010 00:22:13.060 [2024-07-25 10:37:16.634410] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.060 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:13.060 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:13.060 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:13.060 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:13.060 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:13.060 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:13.060 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.060 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.060 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.060 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.060 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.060 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.318 10:37:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.318 "name": "raid_bdev1", 00:22:13.318 "uuid": "807f61f2-b7ac-4729-8cb0-18804f70aeaf", 00:22:13.318 "strip_size_kb": 0, 00:22:13.318 "state": "online", 00:22:13.318 "raid_level": "raid1", 00:22:13.318 "superblock": false, 00:22:13.318 "num_base_bdevs": 2, 00:22:13.318 "num_base_bdevs_discovered": 2, 00:22:13.319 "num_base_bdevs_operational": 2, 00:22:13.319 "base_bdevs_list": [ 00:22:13.319 { 00:22:13.319 "name": "BaseBdev1", 00:22:13.319 "uuid": "951ded44-6336-59a7-852a-7ff20cfe53de", 00:22:13.319 "is_configured": true, 00:22:13.319 "data_offset": 0, 00:22:13.319 "data_size": 65536 00:22:13.319 }, 00:22:13.319 { 00:22:13.319 "name": "BaseBdev2", 00:22:13.319 "uuid": "9f86c741-9772-5c6a-8c0d-20f4d6d6c021", 00:22:13.319 "is_configured": true, 00:22:13.319 "data_offset": 0, 00:22:13.319 "data_size": 65536 00:22:13.319 } 00:22:13.319 ] 00:22:13.319 }' 00:22:13.319 10:37:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.319 10:37:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:13.884 10:37:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:13.884 10:37:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:22:14.142 [2024-07-25 10:37:17.703685] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.142 10:37:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:22:14.142 10:37:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.142 10:37:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:14.400 10:37:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:22:14.400 10:37:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:22:14.400 10:37:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:14.400 10:37:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:14.400 [2024-07-25 10:37:18.083198] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21f0ea0 00:22:14.400 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:14.400 Zero copy mechanism will not be used. 00:22:14.400 Running I/O for 60 seconds... 00:22:14.659 [2024-07-25 10:37:18.207844] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:14.659 [2024-07-25 10:37:18.215212] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x21f0ea0 00:22:14.659 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:14.659 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:14.659 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:14.659 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:14.659 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:14.659 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:14.659 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.659 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:22:14.659 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.659 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.659 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.659 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.917 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.917 "name": "raid_bdev1", 00:22:14.917 "uuid": "807f61f2-b7ac-4729-8cb0-18804f70aeaf", 00:22:14.917 "strip_size_kb": 0, 00:22:14.917 "state": "online", 00:22:14.917 "raid_level": "raid1", 00:22:14.917 "superblock": false, 00:22:14.917 "num_base_bdevs": 2, 00:22:14.917 "num_base_bdevs_discovered": 1, 00:22:14.917 "num_base_bdevs_operational": 1, 00:22:14.917 "base_bdevs_list": [ 00:22:14.917 { 00:22:14.917 "name": null, 00:22:14.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.918 "is_configured": false, 00:22:14.918 "data_offset": 0, 00:22:14.918 "data_size": 65536 00:22:14.918 }, 00:22:14.918 { 00:22:14.918 "name": "BaseBdev2", 00:22:14.918 "uuid": "9f86c741-9772-5c6a-8c0d-20f4d6d6c021", 00:22:14.918 "is_configured": true, 00:22:14.918 "data_offset": 0, 00:22:14.918 "data_size": 65536 00:22:14.918 } 00:22:14.918 ] 00:22:14.918 }' 00:22:14.918 10:37:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.918 10:37:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:15.484 10:37:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:15.742 [2024-07-25 10:37:19.371653] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:15.742 10:37:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:15.742 [2024-07-25 10:37:19.440012] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20524c0 00:22:15.742 [2024-07-25 10:37:19.441976] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:16.000 [2024-07-25 10:37:19.563039] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:16.000 [2024-07-25 10:37:19.563649] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:16.259 [2024-07-25 10:37:19.790797] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:16.259 [2024-07-25 10:37:19.791124] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:16.517 [2024-07-25 10:37:20.136565] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:16.517 [2024-07-25 10:37:20.136907] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:16.775 [2024-07-25 10:37:20.269672] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:16.775 10:37:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:16.775 10:37:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:16.775 10:37:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:16.775 10:37:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local 
target=spare 00:22:16.775 10:37:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:16.775 10:37:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.775 10:37:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.033 10:37:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:17.033 "name": "raid_bdev1", 00:22:17.033 "uuid": "807f61f2-b7ac-4729-8cb0-18804f70aeaf", 00:22:17.033 "strip_size_kb": 0, 00:22:17.033 "state": "online", 00:22:17.033 "raid_level": "raid1", 00:22:17.033 "superblock": false, 00:22:17.033 "num_base_bdevs": 2, 00:22:17.033 "num_base_bdevs_discovered": 2, 00:22:17.033 "num_base_bdevs_operational": 2, 00:22:17.033 "process": { 00:22:17.033 "type": "rebuild", 00:22:17.033 "target": "spare", 00:22:17.033 "progress": { 00:22:17.033 "blocks": 14336, 00:22:17.033 "percent": 21 00:22:17.033 } 00:22:17.033 }, 00:22:17.033 "base_bdevs_list": [ 00:22:17.033 { 00:22:17.033 "name": "spare", 00:22:17.033 "uuid": "8bd639d6-7125-519f-b401-e215668c8fa2", 00:22:17.033 "is_configured": true, 00:22:17.033 "data_offset": 0, 00:22:17.033 "data_size": 65536 00:22:17.033 }, 00:22:17.033 { 00:22:17.033 "name": "BaseBdev2", 00:22:17.033 "uuid": "9f86c741-9772-5c6a-8c0d-20f4d6d6c021", 00:22:17.033 "is_configured": true, 00:22:17.033 "data_offset": 0, 00:22:17.033 "data_size": 65536 00:22:17.033 } 00:22:17.033 ] 00:22:17.033 }' 00:22:17.033 10:37:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:17.033 [2024-07-25 10:37:20.727266] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:17.033 [2024-07-25 10:37:20.727553] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:17.301 10:37:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:17.301 10:37:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:17.301 10:37:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:17.301 10:37:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:17.567 [2024-07-25 10:37:21.058586] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:17.567 [2024-07-25 10:37:21.067311] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:17.567 [2024-07-25 10:37:21.180051] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:17.825 [2024-07-25 10:37:21.287868] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:17.825 [2024-07-25 10:37:21.296279] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.825 [2024-07-25 10:37:21.296312] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:17.825 [2024-07-25 10:37:21.296324] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:17.825 [2024-07-25 10:37:21.309419] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x21f0ea0 00:22:17.825 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:17.825 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:17.825 10:37:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:17.825 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:17.825 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:17.825 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:17.825 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.825 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:17.825 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.825 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.825 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.825 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.083 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:18.083 "name": "raid_bdev1", 00:22:18.083 "uuid": "807f61f2-b7ac-4729-8cb0-18804f70aeaf", 00:22:18.083 "strip_size_kb": 0, 00:22:18.083 "state": "online", 00:22:18.083 "raid_level": "raid1", 00:22:18.083 "superblock": false, 00:22:18.083 "num_base_bdevs": 2, 00:22:18.083 "num_base_bdevs_discovered": 1, 00:22:18.083 "num_base_bdevs_operational": 1, 00:22:18.083 "base_bdevs_list": [ 00:22:18.083 { 00:22:18.083 "name": null, 00:22:18.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.083 "is_configured": false, 00:22:18.083 "data_offset": 0, 00:22:18.083 "data_size": 65536 00:22:18.083 }, 00:22:18.083 { 00:22:18.083 "name": "BaseBdev2", 00:22:18.083 "uuid": "9f86c741-9772-5c6a-8c0d-20f4d6d6c021", 00:22:18.083 "is_configured": true, 00:22:18.083 
"data_offset": 0, 00:22:18.083 "data_size": 65536 00:22:18.083 } 00:22:18.083 ] 00:22:18.083 }' 00:22:18.083 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:18.083 10:37:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:18.650 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:18.650 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:18.650 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:18.650 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:18.650 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:18.650 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.650 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.908 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:18.908 "name": "raid_bdev1", 00:22:18.908 "uuid": "807f61f2-b7ac-4729-8cb0-18804f70aeaf", 00:22:18.908 "strip_size_kb": 0, 00:22:18.908 "state": "online", 00:22:18.908 "raid_level": "raid1", 00:22:18.908 "superblock": false, 00:22:18.908 "num_base_bdevs": 2, 00:22:18.908 "num_base_bdevs_discovered": 1, 00:22:18.908 "num_base_bdevs_operational": 1, 00:22:18.908 "base_bdevs_list": [ 00:22:18.908 { 00:22:18.908 "name": null, 00:22:18.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.908 "is_configured": false, 00:22:18.908 "data_offset": 0, 00:22:18.908 "data_size": 65536 00:22:18.908 }, 00:22:18.908 { 00:22:18.908 "name": "BaseBdev2", 00:22:18.908 "uuid": "9f86c741-9772-5c6a-8c0d-20f4d6d6c021", 
00:22:18.908 "is_configured": true, 00:22:18.908 "data_offset": 0, 00:22:18.908 "data_size": 65536 00:22:18.908 } 00:22:18.908 ] 00:22:18.908 }' 00:22:18.908 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:18.908 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:18.908 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:18.908 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:18.908 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:19.167 [2024-07-25 10:37:22.706723] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:19.167 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:19.167 [2024-07-25 10:37:22.770997] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2053380 00:22:19.167 [2024-07-25 10:37:22.772538] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:19.425 [2024-07-25 10:37:22.889699] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:19.425 [2024-07-25 10:37:22.890195] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:19.425 [2024-07-25 10:37:23.109429] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:19.425 [2024-07-25 10:37:23.109744] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:19.991 [2024-07-25 10:37:23.453202] bdev_raid.c: 851:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:19.991 [2024-07-25 10:37:23.587524] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:20.249 10:37:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.249 10:37:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:20.249 10:37:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:20.249 10:37:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:20.249 10:37:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:20.249 10:37:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.249 10:37:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.249 [2024-07-25 10:37:23.924017] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:20.249 [2024-07-25 10:37:23.924358] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:20.508 [2024-07-25 10:37:24.034776] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:20.508 "name": "raid_bdev1", 00:22:20.508 "uuid": "807f61f2-b7ac-4729-8cb0-18804f70aeaf", 00:22:20.508 "strip_size_kb": 0, 00:22:20.508 "state": "online", 00:22:20.508 "raid_level": "raid1", 00:22:20.508 "superblock": false, 00:22:20.508 "num_base_bdevs": 2, 00:22:20.508 
"num_base_bdevs_discovered": 2, 00:22:20.508 "num_base_bdevs_operational": 2, 00:22:20.508 "process": { 00:22:20.508 "type": "rebuild", 00:22:20.508 "target": "spare", 00:22:20.508 "progress": { 00:22:20.508 "blocks": 14336, 00:22:20.508 "percent": 21 00:22:20.508 } 00:22:20.508 }, 00:22:20.508 "base_bdevs_list": [ 00:22:20.508 { 00:22:20.508 "name": "spare", 00:22:20.508 "uuid": "8bd639d6-7125-519f-b401-e215668c8fa2", 00:22:20.508 "is_configured": true, 00:22:20.508 "data_offset": 0, 00:22:20.508 "data_size": 65536 00:22:20.508 }, 00:22:20.508 { 00:22:20.508 "name": "BaseBdev2", 00:22:20.508 "uuid": "9f86c741-9772-5c6a-8c0d-20f4d6d6c021", 00:22:20.508 "is_configured": true, 00:22:20.508 "data_offset": 0, 00:22:20.508 "data_size": 65536 00:22:20.508 } 00:22:20.508 ] 00:22:20.508 }' 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=808 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.508 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.767 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:20.767 "name": "raid_bdev1", 00:22:20.767 "uuid": "807f61f2-b7ac-4729-8cb0-18804f70aeaf", 00:22:20.767 "strip_size_kb": 0, 00:22:20.767 "state": "online", 00:22:20.767 "raid_level": "raid1", 00:22:20.767 "superblock": false, 00:22:20.767 "num_base_bdevs": 2, 00:22:20.767 "num_base_bdevs_discovered": 2, 00:22:20.767 "num_base_bdevs_operational": 2, 00:22:20.767 "process": { 00:22:20.767 "type": "rebuild", 00:22:20.767 "target": "spare", 00:22:20.767 "progress": { 00:22:20.767 "blocks": 20480, 00:22:20.767 "percent": 31 00:22:20.767 } 00:22:20.767 }, 00:22:20.767 "base_bdevs_list": [ 00:22:20.767 { 00:22:20.767 "name": "spare", 00:22:20.767 "uuid": "8bd639d6-7125-519f-b401-e215668c8fa2", 00:22:20.767 "is_configured": true, 00:22:20.767 "data_offset": 0, 00:22:20.767 "data_size": 65536 00:22:20.767 }, 00:22:20.767 { 00:22:20.767 "name": "BaseBdev2", 00:22:20.767 "uuid": "9f86c741-9772-5c6a-8c0d-20f4d6d6c021", 00:22:20.767 "is_configured": true, 00:22:20.767 "data_offset": 0, 00:22:20.767 "data_size": 65536 00:22:20.767 } 00:22:20.767 ] 00:22:20.767 }' 00:22:20.767 10:37:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:20.767 [2024-07-25 10:37:24.401846] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:20.767 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.767 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:20.767 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.767 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:21.334 [2024-07-25 10:37:24.761581] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:21.334 [2024-07-25 10:37:24.761842] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:21.592 [2024-07-25 10:37:25.113741] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:21.592 [2024-07-25 10:37:25.240274] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:21.850 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:21.850 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:21.850 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:21.850 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:21.851 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:21.851 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # 
local raid_bdev_info 00:22:21.851 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.851 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.145 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:22.145 "name": "raid_bdev1", 00:22:22.145 "uuid": "807f61f2-b7ac-4729-8cb0-18804f70aeaf", 00:22:22.145 "strip_size_kb": 0, 00:22:22.145 "state": "online", 00:22:22.145 "raid_level": "raid1", 00:22:22.145 "superblock": false, 00:22:22.145 "num_base_bdevs": 2, 00:22:22.145 "num_base_bdevs_discovered": 2, 00:22:22.145 "num_base_bdevs_operational": 2, 00:22:22.145 "process": { 00:22:22.145 "type": "rebuild", 00:22:22.145 "target": "spare", 00:22:22.145 "progress": { 00:22:22.145 "blocks": 40960, 00:22:22.145 "percent": 62 00:22:22.145 } 00:22:22.145 }, 00:22:22.145 "base_bdevs_list": [ 00:22:22.145 { 00:22:22.145 "name": "spare", 00:22:22.145 "uuid": "8bd639d6-7125-519f-b401-e215668c8fa2", 00:22:22.145 "is_configured": true, 00:22:22.145 "data_offset": 0, 00:22:22.145 "data_size": 65536 00:22:22.145 }, 00:22:22.145 { 00:22:22.145 "name": "BaseBdev2", 00:22:22.145 "uuid": "9f86c741-9772-5c6a-8c0d-20f4d6d6c021", 00:22:22.145 "is_configured": true, 00:22:22.145 "data_offset": 0, 00:22:22.145 "data_size": 65536 00:22:22.145 } 00:22:22.145 ] 00:22:22.145 }' 00:22:22.145 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:22.145 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:22.145 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:22.145 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:22.145 10:37:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:22.719 [2024-07-25 10:37:26.252027] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:23.286 [2024-07-25 10:37:26.696290] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:23.286 [2024-07-25 10:37:26.696593] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:23.286 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:23.286 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:23.286 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:23.286 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:23.286 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:23.286 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:23.286 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.286 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.545 10:37:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:23.545 "name": "raid_bdev1", 00:22:23.545 "uuid": "807f61f2-b7ac-4729-8cb0-18804f70aeaf", 00:22:23.545 "strip_size_kb": 0, 00:22:23.545 "state": "online", 00:22:23.545 "raid_level": "raid1", 00:22:23.545 "superblock": false, 00:22:23.545 "num_base_bdevs": 2, 00:22:23.545 "num_base_bdevs_discovered": 2, 00:22:23.545 
"num_base_bdevs_operational": 2, 00:22:23.545 "process": { 00:22:23.545 "type": "rebuild", 00:22:23.545 "target": "spare", 00:22:23.545 "progress": { 00:22:23.545 "blocks": 63488, 00:22:23.545 "percent": 96 00:22:23.545 } 00:22:23.545 }, 00:22:23.545 "base_bdevs_list": [ 00:22:23.545 { 00:22:23.545 "name": "spare", 00:22:23.545 "uuid": "8bd639d6-7125-519f-b401-e215668c8fa2", 00:22:23.545 "is_configured": true, 00:22:23.545 "data_offset": 0, 00:22:23.545 "data_size": 65536 00:22:23.545 }, 00:22:23.545 { 00:22:23.545 "name": "BaseBdev2", 00:22:23.545 "uuid": "9f86c741-9772-5c6a-8c0d-20f4d6d6c021", 00:22:23.545 "is_configured": true, 00:22:23.545 "data_offset": 0, 00:22:23.545 "data_size": 65536 00:22:23.545 } 00:22:23.545 ] 00:22:23.545 }' 00:22:23.545 10:37:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:23.545 10:37:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:23.545 10:37:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:23.545 [2024-07-25 10:37:27.144165] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:23.545 10:37:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:23.545 10:37:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:23.545 [2024-07-25 10:37:27.244469] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:23.545 [2024-07-25 10:37:27.253311] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.480 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:24.480 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:24.480 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_name=raid_bdev1 00:22:24.480 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:24.480 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:24.480 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:24.480 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.480 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.738 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:24.738 "name": "raid_bdev1", 00:22:24.738 "uuid": "807f61f2-b7ac-4729-8cb0-18804f70aeaf", 00:22:24.738 "strip_size_kb": 0, 00:22:24.738 "state": "online", 00:22:24.738 "raid_level": "raid1", 00:22:24.738 "superblock": false, 00:22:24.738 "num_base_bdevs": 2, 00:22:24.738 "num_base_bdevs_discovered": 2, 00:22:24.738 "num_base_bdevs_operational": 2, 00:22:24.738 "base_bdevs_list": [ 00:22:24.738 { 00:22:24.738 "name": "spare", 00:22:24.738 "uuid": "8bd639d6-7125-519f-b401-e215668c8fa2", 00:22:24.738 "is_configured": true, 00:22:24.738 "data_offset": 0, 00:22:24.738 "data_size": 65536 00:22:24.738 }, 00:22:24.738 { 00:22:24.738 "name": "BaseBdev2", 00:22:24.738 "uuid": "9f86c741-9772-5c6a-8c0d-20f4d6d6c021", 00:22:24.738 "is_configured": true, 00:22:24.738 "data_offset": 0, 00:22:24.738 "data_size": 65536 00:22:24.738 } 00:22:24.738 ] 00:22:24.738 }' 00:22:24.738 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:24.996 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:24.996 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:24.996 
10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:24.996 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:22:24.996 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:24.996 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:24.996 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:24.996 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:24.996 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:24.996 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.996 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.254 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:25.254 "name": "raid_bdev1", 00:22:25.254 "uuid": "807f61f2-b7ac-4729-8cb0-18804f70aeaf", 00:22:25.254 "strip_size_kb": 0, 00:22:25.254 "state": "online", 00:22:25.254 "raid_level": "raid1", 00:22:25.254 "superblock": false, 00:22:25.254 "num_base_bdevs": 2, 00:22:25.254 "num_base_bdevs_discovered": 2, 00:22:25.254 "num_base_bdevs_operational": 2, 00:22:25.254 "base_bdevs_list": [ 00:22:25.254 { 00:22:25.254 "name": "spare", 00:22:25.254 "uuid": "8bd639d6-7125-519f-b401-e215668c8fa2", 00:22:25.254 "is_configured": true, 00:22:25.254 "data_offset": 0, 00:22:25.254 "data_size": 65536 00:22:25.254 }, 00:22:25.254 { 00:22:25.254 "name": "BaseBdev2", 00:22:25.255 "uuid": "9f86c741-9772-5c6a-8c0d-20f4d6d6c021", 00:22:25.255 "is_configured": true, 00:22:25.255 "data_offset": 0, 00:22:25.255 "data_size": 65536 00:22:25.255 } 
00:22:25.255 ] 00:22:25.255 }' 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.255 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.513 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:22:25.513 "name": "raid_bdev1", 00:22:25.513 "uuid": "807f61f2-b7ac-4729-8cb0-18804f70aeaf", 00:22:25.513 "strip_size_kb": 0, 00:22:25.513 "state": "online", 00:22:25.513 "raid_level": "raid1", 00:22:25.513 "superblock": false, 00:22:25.513 "num_base_bdevs": 2, 00:22:25.513 "num_base_bdevs_discovered": 2, 00:22:25.513 "num_base_bdevs_operational": 2, 00:22:25.513 "base_bdevs_list": [ 00:22:25.513 { 00:22:25.513 "name": "spare", 00:22:25.513 "uuid": "8bd639d6-7125-519f-b401-e215668c8fa2", 00:22:25.513 "is_configured": true, 00:22:25.513 "data_offset": 0, 00:22:25.513 "data_size": 65536 00:22:25.513 }, 00:22:25.513 { 00:22:25.513 "name": "BaseBdev2", 00:22:25.513 "uuid": "9f86c741-9772-5c6a-8c0d-20f4d6d6c021", 00:22:25.513 "is_configured": true, 00:22:25.513 "data_offset": 0, 00:22:25.513 "data_size": 65536 00:22:25.513 } 00:22:25.513 ] 00:22:25.513 }' 00:22:25.513 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.513 10:37:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:26.080 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:26.338 [2024-07-25 10:37:29.850607] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:26.338 [2024-07-25 10:37:29.850639] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:26.338 00:22:26.338 Latency(us) 00:22:26.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.338 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:26.338 raid_bdev1 : 11.75 96.66 289.98 0.00 0.00 14227.25 227.56 117285.17 00:22:26.338 =================================================================================================================== 00:22:26.338 Total : 96.66 
289.98 0.00 0.00 14227.25 227.56 117285.17 00:22:26.338 [2024-07-25 10:37:29.870050] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.338 [2024-07-25 10:37:29.870082] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:26.338 [2024-07-25 10:37:29.870162] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:26.338 [2024-07-25 10:37:29.870178] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21f2010 name raid_bdev1, state offline 00:22:26.338 0 00:22:26.338 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.338 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:22:26.596 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:26.596 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:26.596 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:26.596 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:26.596 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:26.596 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:26.596 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:26.596 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:26.596 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:26.596 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:22:26.596 10:37:30 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:26.596 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:26.596 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:26.854 /dev/nbd0 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:26.854 1+0 records in 00:22:26.854 1+0 records out 00:22:26.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223446 s, 18.3 MB/s 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 
-- # size=4096 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:26.854 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:22:27.112 /dev/nbd1 00:22:27.112 
10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:27.112 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:27.112 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:22:27.112 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:22:27.112 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:27.112 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:27.112 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:22:27.112 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:22:27.112 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:27.112 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:27.112 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:27.112 1+0 records in 00:22:27.112 1+0 records out 00:22:27.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187163 s, 21.9 MB/s 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@889 -- # return 0 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:27.113 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:27.370 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:27.371 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:27.371 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:27.371 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:27.371 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:27.371 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:27.629 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:27.629 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 
00:22:27.629 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:27.629 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:27.629 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:27.629 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:27.629 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:27.629 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:27.629 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2438442 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 2438442 ']' 00:22:27.887 10:37:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 2438442 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2438442 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2438442' 00:22:27.887 killing process with pid 2438442 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 2438442 00:22:27.887 Received shutdown signal, test time was about 13.266682 seconds 00:22:27.887 00:22:27.887 Latency(us) 00:22:27.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.887 =================================================================================================================== 00:22:27.887 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.887 [2024-07-25 10:37:31.384607] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:27.887 10:37:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 2438442 00:22:27.887 [2024-07-25 10:37:31.411224] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:22:28.145 00:22:28.145 real 0m18.037s 00:22:28.145 user 0m27.748s 00:22:28.145 sys 0m2.247s 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:22:28.145 ************************************ 00:22:28.145 END TEST raid_rebuild_test_io 00:22:28.145 ************************************ 00:22:28.145 10:37:31 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:22:28.145 10:37:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:28.145 10:37:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:28.145 10:37:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:28.145 ************************************ 00:22:28.145 START TEST raid_rebuild_test_sb_io 00:22:28.145 ************************************ 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # 
echo BaseBdev2 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2440797 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2440797 /var/tmp/spdk-raid.sock 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 2440797 
']' 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:28.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.145 10:37:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:28.145 [2024-07-25 10:37:31.800074] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:22:28.146 [2024-07-25 10:37:31.800181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2440797 ] 00:22:28.146 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:28.146 Zero copy mechanism will not be used. 
00:22:28.404 [2024-07-25 10:37:31.890231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.404 [2024-07-25 10:37:32.016027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.404 [2024-07-25 10:37:32.088447] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:28.404 [2024-07-25 10:37:32.088494] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:28.662 10:37:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.662 10:37:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:22:28.662 10:37:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:28.662 10:37:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:28.921 BaseBdev1_malloc 00:22:28.921 10:37:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:29.179 [2024-07-25 10:37:32.714344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:29.179 [2024-07-25 10:37:32.714412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.179 [2024-07-25 10:37:32.714445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x136b430 00:22:29.179 [2024-07-25 10:37:32.714461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.179 [2024-07-25 10:37:32.716382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.179 [2024-07-25 10:37:32.716411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:29.179 
BaseBdev1 00:22:29.179 10:37:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:29.179 10:37:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:29.437 BaseBdev2_malloc 00:22:29.437 10:37:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:29.696 [2024-07-25 10:37:33.303430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:29.696 [2024-07-25 10:37:33.303506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.696 [2024-07-25 10:37:33.303537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x150ea20 00:22:29.696 [2024-07-25 10:37:33.303551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.696 [2024-07-25 10:37:33.305114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.696 [2024-07-25 10:37:33.305137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:29.696 BaseBdev2 00:22:29.696 10:37:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:29.953 spare_malloc 00:22:29.953 10:37:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:30.211 spare_delay 00:22:30.469 10:37:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:30.726 [2024-07-25 10:37:34.192491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:30.726 [2024-07-25 10:37:34.192559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.726 [2024-07-25 10:37:34.192594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1363070 00:22:30.726 [2024-07-25 10:37:34.192611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.726 [2024-07-25 10:37:34.194445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.726 [2024-07-25 10:37:34.194473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:30.726 spare 00:22:30.726 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:30.985 [2024-07-25 10:37:34.481305] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:30.985 [2024-07-25 10:37:34.482802] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:30.985 [2024-07-25 10:37:34.483013] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1506010 00:22:30.985 [2024-07-25 10:37:34.483032] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:30.985 [2024-07-25 10:37:34.483279] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1362d20 00:22:30.985 [2024-07-25 10:37:34.483474] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1506010 00:22:30.985 [2024-07-25 10:37:34.483497] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x1506010 00:22:30.985 [2024-07-25 10:37:34.483642] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.985 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:30.985 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:30.985 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:30.985 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:30.985 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:30.985 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:30.985 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.985 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.985 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.985 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.985 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.985 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.244 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:31.244 "name": "raid_bdev1", 00:22:31.244 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:31.244 "strip_size_kb": 0, 00:22:31.244 "state": "online", 00:22:31.244 "raid_level": "raid1", 00:22:31.244 "superblock": true, 00:22:31.244 "num_base_bdevs": 2, 00:22:31.244 
"num_base_bdevs_discovered": 2, 00:22:31.244 "num_base_bdevs_operational": 2, 00:22:31.244 "base_bdevs_list": [ 00:22:31.244 { 00:22:31.244 "name": "BaseBdev1", 00:22:31.244 "uuid": "9025e653-ef07-59a9-a387-c8c640350700", 00:22:31.244 "is_configured": true, 00:22:31.244 "data_offset": 2048, 00:22:31.244 "data_size": 63488 00:22:31.244 }, 00:22:31.244 { 00:22:31.244 "name": "BaseBdev2", 00:22:31.244 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:31.244 "is_configured": true, 00:22:31.244 "data_offset": 2048, 00:22:31.244 "data_size": 63488 00:22:31.244 } 00:22:31.244 ] 00:22:31.244 }' 00:22:31.244 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:31.244 10:37:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:31.810 10:37:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:31.810 10:37:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:22:32.067 [2024-07-25 10:37:35.568388] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:32.067 10:37:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:22:32.067 10:37:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.067 10:37:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:32.325 10:37:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:22:32.325 10:37:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:22:32.325 10:37:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:32.325 10:37:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:32.325 [2024-07-25 10:37:35.947884] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x136c6b0 00:22:32.325 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:32.325 Zero copy mechanism will not be used. 00:22:32.325 Running I/O for 60 seconds... 00:22:32.583 [2024-07-25 10:37:36.104001] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:32.583 [2024-07-25 10:37:36.118592] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x136c6b0 00:22:32.583 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:32.583 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:32.583 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:32.583 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:32.583 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:32.583 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:32.583 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.583 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.583 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.583 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 
00:22:32.583 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.583 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.840 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.840 "name": "raid_bdev1", 00:22:32.840 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:32.840 "strip_size_kb": 0, 00:22:32.840 "state": "online", 00:22:32.840 "raid_level": "raid1", 00:22:32.840 "superblock": true, 00:22:32.840 "num_base_bdevs": 2, 00:22:32.840 "num_base_bdevs_discovered": 1, 00:22:32.840 "num_base_bdevs_operational": 1, 00:22:32.840 "base_bdevs_list": [ 00:22:32.840 { 00:22:32.840 "name": null, 00:22:32.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.840 "is_configured": false, 00:22:32.840 "data_offset": 2048, 00:22:32.840 "data_size": 63488 00:22:32.840 }, 00:22:32.840 { 00:22:32.840 "name": "BaseBdev2", 00:22:32.840 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:32.840 "is_configured": true, 00:22:32.840 "data_offset": 2048, 00:22:32.840 "data_size": 63488 00:22:32.840 } 00:22:32.840 ] 00:22:32.840 }' 00:22:32.840 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.840 10:37:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:33.405 10:37:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:33.663 [2024-07-25 10:37:37.333812] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:33.920 [2024-07-25 10:37:37.376886] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1502860 00:22:33.920 [2024-07-25 
10:37:37.379070] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:33.920 10:37:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:33.920 [2024-07-25 10:37:37.496071] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:33.920 [2024-07-25 10:37:37.496638] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:34.178 [2024-07-25 10:37:37.706946] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:34.178 [2024-07-25 10:37:37.707205] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:34.436 [2024-07-25 10:37:38.051575] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:34.436 [2024-07-25 10:37:38.051911] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:34.694 [2024-07-25 10:37:38.293680] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:34.694 10:37:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.694 10:37:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:34.694 10:37:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:34.694 10:37:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:34.694 10:37:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:34.694 10:37:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.694 10:37:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.952 10:37:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:34.952 "name": "raid_bdev1", 00:22:34.952 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:34.952 "strip_size_kb": 0, 00:22:34.952 "state": "online", 00:22:34.952 "raid_level": "raid1", 00:22:34.952 "superblock": true, 00:22:34.952 "num_base_bdevs": 2, 00:22:34.952 "num_base_bdevs_discovered": 2, 00:22:34.952 "num_base_bdevs_operational": 2, 00:22:34.952 "process": { 00:22:34.952 "type": "rebuild", 00:22:34.952 "target": "spare", 00:22:34.952 "progress": { 00:22:34.952 "blocks": 12288, 00:22:34.952 "percent": 19 00:22:34.952 } 00:22:34.952 }, 00:22:34.952 "base_bdevs_list": [ 00:22:34.952 { 00:22:34.952 "name": "spare", 00:22:34.952 "uuid": "58be73b5-e46b-557a-96dc-805d1e368848", 00:22:34.952 "is_configured": true, 00:22:34.952 "data_offset": 2048, 00:22:34.952 "data_size": 63488 00:22:34.952 }, 00:22:34.952 { 00:22:34.952 "name": "BaseBdev2", 00:22:34.952 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:34.952 "is_configured": true, 00:22:34.952 "data_offset": 2048, 00:22:34.952 "data_size": 63488 00:22:34.952 } 00:22:34.952 ] 00:22:34.952 }' 00:22:34.952 10:37:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:35.210 10:37:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:35.210 10:37:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:35.210 [2024-07-25 10:37:38.679049] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:35.210 10:37:38 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:35.210 10:37:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:35.467 [2024-07-25 10:37:38.920869] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:35.468 [2024-07-25 10:37:38.928140] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:35.468 [2024-07-25 10:37:38.928380] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:35.468 [2024-07-25 10:37:39.029907] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:35.468 [2024-07-25 10:37:39.045823] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.468 [2024-07-25 10:37:39.045851] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:35.468 [2024-07-25 10:37:39.045864] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:35.468 [2024-07-25 10:37:39.053723] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x136c6b0 00:22:35.468 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:35.468 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:35.468 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:35.468 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:35.468 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:35.468 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:35.468 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:35.468 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:35.468 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:35.468 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:35.468 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.468 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.726 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:35.726 "name": "raid_bdev1", 00:22:35.726 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:35.726 "strip_size_kb": 0, 00:22:35.726 "state": "online", 00:22:35.726 "raid_level": "raid1", 00:22:35.726 "superblock": true, 00:22:35.726 "num_base_bdevs": 2, 00:22:35.726 "num_base_bdevs_discovered": 1, 00:22:35.726 "num_base_bdevs_operational": 1, 00:22:35.726 "base_bdevs_list": [ 00:22:35.726 { 00:22:35.726 "name": null, 00:22:35.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.726 "is_configured": false, 00:22:35.726 "data_offset": 2048, 00:22:35.726 "data_size": 63488 00:22:35.726 }, 00:22:35.726 { 00:22:35.726 "name": "BaseBdev2", 00:22:35.726 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:35.726 "is_configured": true, 00:22:35.726 "data_offset": 2048, 00:22:35.726 "data_size": 63488 00:22:35.726 } 00:22:35.726 ] 00:22:35.726 }' 00:22:35.726 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:35.726 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:22:36.291 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:36.291 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:36.291 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:36.291 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:36.291 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:36.291 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.291 10:37:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.549 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:36.549 "name": "raid_bdev1", 00:22:36.549 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:36.549 "strip_size_kb": 0, 00:22:36.549 "state": "online", 00:22:36.549 "raid_level": "raid1", 00:22:36.549 "superblock": true, 00:22:36.549 "num_base_bdevs": 2, 00:22:36.549 "num_base_bdevs_discovered": 1, 00:22:36.549 "num_base_bdevs_operational": 1, 00:22:36.549 "base_bdevs_list": [ 00:22:36.549 { 00:22:36.549 "name": null, 00:22:36.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.549 "is_configured": false, 00:22:36.549 "data_offset": 2048, 00:22:36.549 "data_size": 63488 00:22:36.549 }, 00:22:36.549 { 00:22:36.549 "name": "BaseBdev2", 00:22:36.549 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:36.549 "is_configured": true, 00:22:36.549 "data_offset": 2048, 00:22:36.549 "data_size": 63488 00:22:36.549 } 00:22:36.549 ] 00:22:36.549 }' 00:22:36.549 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:22:36.549 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:36.549 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:36.806 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:36.806 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:36.806 [2024-07-25 10:37:40.514702] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:37.064 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:37.064 [2024-07-25 10:37:40.564632] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15027e0 00:22:37.064 [2024-07-25 10:37:40.565973] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:37.064 [2024-07-25 10:37:40.666932] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:37.064 [2024-07-25 10:37:40.667253] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:37.629 [2024-07-25 10:37:41.056229] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:37.629 [2024-07-25 10:37:41.181919] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:37.887 [2024-07-25 10:37:41.423912] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:37.887 [2024-07-25 10:37:41.555316] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:22:37.887 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:37.887 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:37.887 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:37.887 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:37.887 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:37.887 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.887 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.145 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:38.145 "name": "raid_bdev1", 00:22:38.145 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:38.145 "strip_size_kb": 0, 00:22:38.145 "state": "online", 00:22:38.145 "raid_level": "raid1", 00:22:38.145 "superblock": true, 00:22:38.145 "num_base_bdevs": 2, 00:22:38.145 "num_base_bdevs_discovered": 2, 00:22:38.145 "num_base_bdevs_operational": 2, 00:22:38.145 "process": { 00:22:38.145 "type": "rebuild", 00:22:38.145 "target": "spare", 00:22:38.145 "progress": { 00:22:38.145 "blocks": 18432, 00:22:38.145 "percent": 29 00:22:38.145 } 00:22:38.145 }, 00:22:38.145 "base_bdevs_list": [ 00:22:38.145 { 00:22:38.145 "name": "spare", 00:22:38.145 "uuid": "58be73b5-e46b-557a-96dc-805d1e368848", 00:22:38.145 "is_configured": true, 00:22:38.145 "data_offset": 2048, 00:22:38.145 "data_size": 63488 00:22:38.145 }, 00:22:38.145 { 00:22:38.145 "name": "BaseBdev2", 00:22:38.145 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:38.145 "is_configured": true, 00:22:38.145 
"data_offset": 2048, 00:22:38.145 "data_size": 63488 00:22:38.145 } 00:22:38.145 ] 00:22:38.145 }' 00:22:38.145 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:38.145 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:38.403 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:38.403 [2024-07-25 10:37:41.884268] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:38.403 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:38.403 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:22:38.403 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:22:38.403 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:22:38.403 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:38.403 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:38.404 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:38.404 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=825 00:22:38.404 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:38.404 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:38.404 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:38.404 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:38.404 10:37:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:38.404 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:38.404 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.404 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.404 [2024-07-25 10:37:42.019535] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:38.662 10:37:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:38.662 "name": "raid_bdev1", 00:22:38.662 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:38.662 "strip_size_kb": 0, 00:22:38.662 "state": "online", 00:22:38.662 "raid_level": "raid1", 00:22:38.662 "superblock": true, 00:22:38.662 "num_base_bdevs": 2, 00:22:38.662 "num_base_bdevs_discovered": 2, 00:22:38.662 "num_base_bdevs_operational": 2, 00:22:38.662 "process": { 00:22:38.662 "type": "rebuild", 00:22:38.662 "target": "spare", 00:22:38.662 "progress": { 00:22:38.662 "blocks": 22528, 00:22:38.662 "percent": 35 00:22:38.662 } 00:22:38.662 }, 00:22:38.662 "base_bdevs_list": [ 00:22:38.662 { 00:22:38.662 "name": "spare", 00:22:38.662 "uuid": "58be73b5-e46b-557a-96dc-805d1e368848", 00:22:38.662 "is_configured": true, 00:22:38.662 "data_offset": 2048, 00:22:38.662 "data_size": 63488 00:22:38.662 }, 00:22:38.662 { 00:22:38.662 "name": "BaseBdev2", 00:22:38.662 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:38.662 "is_configured": true, 00:22:38.662 "data_offset": 2048, 00:22:38.662 "data_size": 63488 00:22:38.662 } 00:22:38.662 ] 00:22:38.662 }' 00:22:38.662 10:37:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:38.662 
10:37:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:38.662 10:37:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:38.662 10:37:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:38.662 10:37:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:38.662 [2024-07-25 10:37:42.268055] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:38.920 [2024-07-25 10:37:42.494381] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:39.485 [2024-07-25 10:37:43.095358] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:39.744 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:39.744 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:39.744 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:39.744 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:39.744 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:39.744 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:39.744 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.744 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.019 10:37:43 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:40.019 "name": "raid_bdev1", 00:22:40.019 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:40.019 "strip_size_kb": 0, 00:22:40.019 "state": "online", 00:22:40.019 "raid_level": "raid1", 00:22:40.019 "superblock": true, 00:22:40.019 "num_base_bdevs": 2, 00:22:40.019 "num_base_bdevs_discovered": 2, 00:22:40.019 "num_base_bdevs_operational": 2, 00:22:40.019 "process": { 00:22:40.019 "type": "rebuild", 00:22:40.019 "target": "spare", 00:22:40.019 "progress": { 00:22:40.019 "blocks": 45056, 00:22:40.019 "percent": 70 00:22:40.019 } 00:22:40.019 }, 00:22:40.019 "base_bdevs_list": [ 00:22:40.019 { 00:22:40.019 "name": "spare", 00:22:40.019 "uuid": "58be73b5-e46b-557a-96dc-805d1e368848", 00:22:40.019 "is_configured": true, 00:22:40.019 "data_offset": 2048, 00:22:40.019 "data_size": 63488 00:22:40.019 }, 00:22:40.019 { 00:22:40.019 "name": "BaseBdev2", 00:22:40.019 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:40.019 "is_configured": true, 00:22:40.019 "data_offset": 2048, 00:22:40.019 "data_size": 63488 00:22:40.019 } 00:22:40.019 ] 00:22:40.019 }' 00:22:40.019 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:40.019 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:40.019 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:40.019 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.019 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:40.295 [2024-07-25 10:37:43.750661] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:40.295 [2024-07-25 10:37:43.966229] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 
offset_end: 55296 00:22:40.861 [2024-07-25 10:37:44.525735] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:41.119 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:41.119 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:41.119 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:41.119 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:41.119 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:41.119 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:41.119 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.119 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.119 [2024-07-25 10:37:44.633174] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:41.119 [2024-07-25 10:37:44.635168] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.377 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:41.377 "name": "raid_bdev1", 00:22:41.377 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:41.377 "strip_size_kb": 0, 00:22:41.377 "state": "online", 00:22:41.377 "raid_level": "raid1", 00:22:41.377 "superblock": true, 00:22:41.377 "num_base_bdevs": 2, 00:22:41.377 "num_base_bdevs_discovered": 2, 00:22:41.377 "num_base_bdevs_operational": 2, 00:22:41.377 "base_bdevs_list": [ 00:22:41.377 { 00:22:41.377 "name": "spare", 00:22:41.377 "uuid": 
"58be73b5-e46b-557a-96dc-805d1e368848", 00:22:41.377 "is_configured": true, 00:22:41.377 "data_offset": 2048, 00:22:41.377 "data_size": 63488 00:22:41.377 }, 00:22:41.377 { 00:22:41.377 "name": "BaseBdev2", 00:22:41.377 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:41.377 "is_configured": true, 00:22:41.377 "data_offset": 2048, 00:22:41.377 "data_size": 63488 00:22:41.377 } 00:22:41.377 ] 00:22:41.377 }' 00:22:41.377 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:41.377 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:41.377 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:41.377 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:41.377 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:22:41.377 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:41.377 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:41.378 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:41.378 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:41.378 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:41.378 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.378 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.636 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:41.636 "name": 
"raid_bdev1", 00:22:41.636 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:41.636 "strip_size_kb": 0, 00:22:41.636 "state": "online", 00:22:41.636 "raid_level": "raid1", 00:22:41.636 "superblock": true, 00:22:41.636 "num_base_bdevs": 2, 00:22:41.636 "num_base_bdevs_discovered": 2, 00:22:41.636 "num_base_bdevs_operational": 2, 00:22:41.636 "base_bdevs_list": [ 00:22:41.636 { 00:22:41.636 "name": "spare", 00:22:41.636 "uuid": "58be73b5-e46b-557a-96dc-805d1e368848", 00:22:41.636 "is_configured": true, 00:22:41.636 "data_offset": 2048, 00:22:41.636 "data_size": 63488 00:22:41.636 }, 00:22:41.636 { 00:22:41.636 "name": "BaseBdev2", 00:22:41.636 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:41.636 "is_configured": true, 00:22:41.636 "data_offset": 2048, 00:22:41.636 "data_size": 63488 00:22:41.636 } 00:22:41.636 ] 00:22:41.636 }' 00:22:41.636 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:41.636 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:41.636 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:41.636 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:41.636 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:41.636 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:41.636 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:41.636 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:41.636 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:41.637 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:22:41.637 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:41.637 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:41.637 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:41.637 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:41.637 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.637 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.895 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:41.895 "name": "raid_bdev1", 00:22:41.895 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:41.895 "strip_size_kb": 0, 00:22:41.895 "state": "online", 00:22:41.895 "raid_level": "raid1", 00:22:41.895 "superblock": true, 00:22:41.895 "num_base_bdevs": 2, 00:22:41.895 "num_base_bdevs_discovered": 2, 00:22:41.895 "num_base_bdevs_operational": 2, 00:22:41.895 "base_bdevs_list": [ 00:22:41.895 { 00:22:41.895 "name": "spare", 00:22:41.895 "uuid": "58be73b5-e46b-557a-96dc-805d1e368848", 00:22:41.895 "is_configured": true, 00:22:41.895 "data_offset": 2048, 00:22:41.895 "data_size": 63488 00:22:41.895 }, 00:22:41.895 { 00:22:41.895 "name": "BaseBdev2", 00:22:41.895 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:41.895 "is_configured": true, 00:22:41.895 "data_offset": 2048, 00:22:41.895 "data_size": 63488 00:22:41.895 } 00:22:41.895 ] 00:22:41.895 }' 00:22:41.895 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:41.895 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:42.461 10:37:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:42.719 [2024-07-25 10:37:46.277644] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:42.719 [2024-07-25 10:37:46.277698] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:42.719 00:22:42.719 Latency(us) 00:22:42.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.719 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:42.719 raid_bdev1 : 10.40 112.38 337.14 0.00 0.00 11412.76 227.56 117285.17 00:22:42.719 =================================================================================================================== 00:22:42.719 Total : 112.38 337.14 0.00 0.00 11412.76 227.56 117285.17 00:22:42.719 [2024-07-25 10:37:46.381816] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.719 [2024-07-25 10:37:46.381852] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:42.719 [2024-07-25 10:37:46.381931] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:42.719 [2024-07-25 10:37:46.381948] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1506010 name raid_bdev1, state offline 00:22:42.719 0 00:22:42.719 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.719 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:22:42.977 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:42.977 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 
00:22:42.977 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:42.977 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:42.977 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:42.977 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:42.977 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:42.977 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:42.977 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:42.977 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:22:42.977 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:42.977 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:42.977 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:43.236 /dev/nbd0 00:22:43.236 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:43.236 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:43.236 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:43.236 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:22:43.236 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:43.236 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:43.236 10:37:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:43.236 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:22:43.236 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:43.236 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:43.236 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:43.494 1+0 records in 00:22:43.494 1+0 records out 00:22:43.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242403 s, 16.9 MB/s 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:22:43.494 
10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:43.494 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:22:43.752 /dev/nbd1 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:43.752 1+0 records in 00:22:43.752 1+0 records out 00:22:43.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246393 s, 16.6 MB/s 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@51 -- # local i 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:43.752 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:44.011 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:44.269 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:44.269 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:44.269 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:44.269 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:44.269 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:44.269 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:44.269 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:22:44.269 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:44.269 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:22:44.269 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:44.526 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:44.784 [2024-07-25 10:37:48.378616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:44.784 [2024-07-25 10:37:48.378668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.784 [2024-07-25 10:37:48.378694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x136aa80 00:22:44.784 [2024-07-25 10:37:48.378710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.784 [2024-07-25 10:37:48.380458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:22:44.784 [2024-07-25 10:37:48.380486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:44.785 [2024-07-25 10:37:48.380576] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:44.785 [2024-07-25 10:37:48.380617] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:44.785 [2024-07-25 10:37:48.380746] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:44.785 spare 00:22:44.785 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:44.785 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:44.785 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:44.785 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:44.785 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:44.785 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:44.785 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.785 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.785 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.785 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.785 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.785 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.785 
[2024-07-25 10:37:48.481083] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1501110 00:22:44.785 [2024-07-25 10:37:48.481119] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:44.785 [2024-07-25 10:37:48.481309] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x150ecb0 00:22:44.785 [2024-07-25 10:37:48.481476] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1501110 00:22:44.785 [2024-07-25 10:37:48.481493] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1501110 00:22:44.785 [2024-07-25 10:37:48.481607] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.043 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.043 "name": "raid_bdev1", 00:22:45.043 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:45.043 "strip_size_kb": 0, 00:22:45.043 "state": "online", 00:22:45.043 "raid_level": "raid1", 00:22:45.043 "superblock": true, 00:22:45.043 "num_base_bdevs": 2, 00:22:45.043 "num_base_bdevs_discovered": 2, 00:22:45.043 "num_base_bdevs_operational": 2, 00:22:45.043 "base_bdevs_list": [ 00:22:45.043 { 00:22:45.043 "name": "spare", 00:22:45.043 "uuid": "58be73b5-e46b-557a-96dc-805d1e368848", 00:22:45.043 "is_configured": true, 00:22:45.043 "data_offset": 2048, 00:22:45.043 "data_size": 63488 00:22:45.043 }, 00:22:45.043 { 00:22:45.043 "name": "BaseBdev2", 00:22:45.043 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:45.043 "is_configured": true, 00:22:45.043 "data_offset": 2048, 00:22:45.043 "data_size": 63488 00:22:45.043 } 00:22:45.043 ] 00:22:45.043 }' 00:22:45.043 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.043 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:45.608 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 
-- # verify_raid_bdev_process raid_bdev1 none none 00:22:45.608 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:45.608 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:45.608 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:45.608 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:45.608 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.608 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.866 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:45.866 "name": "raid_bdev1", 00:22:45.866 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:45.866 "strip_size_kb": 0, 00:22:45.866 "state": "online", 00:22:45.866 "raid_level": "raid1", 00:22:45.866 "superblock": true, 00:22:45.866 "num_base_bdevs": 2, 00:22:45.866 "num_base_bdevs_discovered": 2, 00:22:45.866 "num_base_bdevs_operational": 2, 00:22:45.866 "base_bdevs_list": [ 00:22:45.866 { 00:22:45.866 "name": "spare", 00:22:45.866 "uuid": "58be73b5-e46b-557a-96dc-805d1e368848", 00:22:45.866 "is_configured": true, 00:22:45.866 "data_offset": 2048, 00:22:45.866 "data_size": 63488 00:22:45.866 }, 00:22:45.866 { 00:22:45.866 "name": "BaseBdev2", 00:22:45.866 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:45.866 "is_configured": true, 00:22:45.866 "data_offset": 2048, 00:22:45.866 "data_size": 63488 00:22:45.866 } 00:22:45.866 ] 00:22:45.866 }' 00:22:45.866 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:45.866 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # 
[[ none == \n\o\n\e ]] 00:22:45.866 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:45.866 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:45.866 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:45.866 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.125 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:22:46.125 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:46.693 [2024-07-25 10:37:50.107613] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- 
# local num_base_bdevs_discovered 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:46.693 "name": "raid_bdev1", 00:22:46.693 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:46.693 "strip_size_kb": 0, 00:22:46.693 "state": "online", 00:22:46.693 "raid_level": "raid1", 00:22:46.693 "superblock": true, 00:22:46.693 "num_base_bdevs": 2, 00:22:46.693 "num_base_bdevs_discovered": 1, 00:22:46.693 "num_base_bdevs_operational": 1, 00:22:46.693 "base_bdevs_list": [ 00:22:46.693 { 00:22:46.693 "name": null, 00:22:46.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.693 "is_configured": false, 00:22:46.693 "data_offset": 2048, 00:22:46.693 "data_size": 63488 00:22:46.693 }, 00:22:46.693 { 00:22:46.693 "name": "BaseBdev2", 00:22:46.693 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:46.693 "is_configured": true, 00:22:46.693 "data_offset": 2048, 00:22:46.693 "data_size": 63488 00:22:46.693 } 00:22:46.693 ] 00:22:46.693 }' 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:46.693 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:47.258 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:47.517 [2024-07-25 10:37:51.154537] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:47.517 [2024-07-25 
10:37:51.154740] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:47.517 [2024-07-25 10:37:51.154762] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:47.517 [2024-07-25 10:37:51.154794] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:47.517 [2024-07-25 10:37:51.161779] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x150ecb0 00:22:47.517 [2024-07-25 10:37:51.163741] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:47.517 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:22:48.889 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.889 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:48.889 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:48.889 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:48.889 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:48.889 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.889 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.889 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:48.889 "name": "raid_bdev1", 00:22:48.889 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:48.889 "strip_size_kb": 0, 00:22:48.889 "state": "online", 00:22:48.889 "raid_level": "raid1", 00:22:48.889 
"superblock": true, 00:22:48.889 "num_base_bdevs": 2, 00:22:48.889 "num_base_bdevs_discovered": 2, 00:22:48.889 "num_base_bdevs_operational": 2, 00:22:48.889 "process": { 00:22:48.889 "type": "rebuild", 00:22:48.889 "target": "spare", 00:22:48.889 "progress": { 00:22:48.890 "blocks": 24576, 00:22:48.890 "percent": 38 00:22:48.890 } 00:22:48.890 }, 00:22:48.890 "base_bdevs_list": [ 00:22:48.890 { 00:22:48.890 "name": "spare", 00:22:48.890 "uuid": "58be73b5-e46b-557a-96dc-805d1e368848", 00:22:48.890 "is_configured": true, 00:22:48.890 "data_offset": 2048, 00:22:48.890 "data_size": 63488 00:22:48.890 }, 00:22:48.890 { 00:22:48.890 "name": "BaseBdev2", 00:22:48.890 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:48.890 "is_configured": true, 00:22:48.890 "data_offset": 2048, 00:22:48.890 "data_size": 63488 00:22:48.890 } 00:22:48.890 ] 00:22:48.890 }' 00:22:48.890 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:48.890 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.890 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:48.890 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.890 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:49.148 [2024-07-25 10:37:52.743291] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:49.148 [2024-07-25 10:37:52.777245] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:49.148 [2024-07-25 10:37:52.777297] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.148 [2024-07-25 10:37:52.777316] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:22:49.148 [2024-07-25 10:37:52.777325] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:49.148 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:49.148 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:49.148 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:49.148 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:49.148 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:49.148 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:49.148 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.148 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.148 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.148 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.148 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.148 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.406 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:49.406 "name": "raid_bdev1", 00:22:49.406 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:49.406 "strip_size_kb": 0, 00:22:49.406 "state": "online", 00:22:49.406 "raid_level": "raid1", 00:22:49.406 "superblock": true, 00:22:49.406 "num_base_bdevs": 2, 
00:22:49.406 "num_base_bdevs_discovered": 1, 00:22:49.406 "num_base_bdevs_operational": 1, 00:22:49.406 "base_bdevs_list": [ 00:22:49.406 { 00:22:49.406 "name": null, 00:22:49.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.406 "is_configured": false, 00:22:49.406 "data_offset": 2048, 00:22:49.406 "data_size": 63488 00:22:49.406 }, 00:22:49.406 { 00:22:49.406 "name": "BaseBdev2", 00:22:49.406 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:49.406 "is_configured": true, 00:22:49.406 "data_offset": 2048, 00:22:49.406 "data_size": 63488 00:22:49.406 } 00:22:49.406 ] 00:22:49.406 }' 00:22:49.406 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:49.406 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:49.970 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:50.226 [2024-07-25 10:37:53.858329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:50.226 [2024-07-25 10:37:53.858386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:50.226 [2024-07-25 10:37:53.858420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1365d70 00:22:50.226 [2024-07-25 10:37:53.858436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:50.226 [2024-07-25 10:37:53.858877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:50.226 [2024-07-25 10:37:53.858904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:50.226 [2024-07-25 10:37:53.859001] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:50.227 [2024-07-25 10:37:53.859021] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on 
bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:50.227 [2024-07-25 10:37:53.859032] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:50.227 [2024-07-25 10:37:53.859057] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:50.227 [2024-07-25 10:37:53.866121] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x136cc60 00:22:50.227 spare 00:22:50.227 [2024-07-25 10:37:53.867669] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:50.227 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:22:51.597 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.597 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:51.597 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:51.597 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:51.597 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:51.597 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.597 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.597 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:51.597 "name": "raid_bdev1", 00:22:51.597 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:51.597 "strip_size_kb": 0, 00:22:51.597 "state": "online", 00:22:51.597 "raid_level": "raid1", 00:22:51.597 "superblock": true, 00:22:51.597 "num_base_bdevs": 2, 00:22:51.597 
"num_base_bdevs_discovered": 2, 00:22:51.597 "num_base_bdevs_operational": 2, 00:22:51.597 "process": { 00:22:51.597 "type": "rebuild", 00:22:51.597 "target": "spare", 00:22:51.597 "progress": { 00:22:51.597 "blocks": 24576, 00:22:51.597 "percent": 38 00:22:51.597 } 00:22:51.597 }, 00:22:51.597 "base_bdevs_list": [ 00:22:51.597 { 00:22:51.597 "name": "spare", 00:22:51.597 "uuid": "58be73b5-e46b-557a-96dc-805d1e368848", 00:22:51.597 "is_configured": true, 00:22:51.597 "data_offset": 2048, 00:22:51.597 "data_size": 63488 00:22:51.597 }, 00:22:51.597 { 00:22:51.597 "name": "BaseBdev2", 00:22:51.597 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:51.597 "is_configured": true, 00:22:51.597 "data_offset": 2048, 00:22:51.597 "data_size": 63488 00:22:51.597 } 00:22:51.597 ] 00:22:51.597 }' 00:22:51.597 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:51.597 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.597 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:51.597 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.597 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:51.855 [2024-07-25 10:37:55.478588] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:51.855 [2024-07-25 10:37:55.481106] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:51.855 [2024-07-25 10:37:55.481164] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.855 [2024-07-25 10:37:55.481186] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:51.855 [2024-07-25 10:37:55.481197] 
bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:51.855 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:51.855 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:51.855 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:51.855 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:51.855 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:51.855 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:51.855 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.855 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.855 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.855 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.855 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.855 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.113 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.113 "name": "raid_bdev1", 00:22:52.113 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:52.113 "strip_size_kb": 0, 00:22:52.113 "state": "online", 00:22:52.113 "raid_level": "raid1", 00:22:52.113 "superblock": true, 00:22:52.113 "num_base_bdevs": 2, 00:22:52.113 "num_base_bdevs_discovered": 1, 00:22:52.113 
"num_base_bdevs_operational": 1, 00:22:52.113 "base_bdevs_list": [ 00:22:52.113 { 00:22:52.113 "name": null, 00:22:52.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.113 "is_configured": false, 00:22:52.113 "data_offset": 2048, 00:22:52.113 "data_size": 63488 00:22:52.113 }, 00:22:52.113 { 00:22:52.113 "name": "BaseBdev2", 00:22:52.113 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:52.113 "is_configured": true, 00:22:52.113 "data_offset": 2048, 00:22:52.113 "data_size": 63488 00:22:52.113 } 00:22:52.113 ] 00:22:52.113 }' 00:22:52.113 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.113 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:52.678 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:52.678 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:52.678 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:52.678 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:52.678 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:52.678 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.678 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.935 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:52.935 "name": "raid_bdev1", 00:22:52.935 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:52.935 "strip_size_kb": 0, 00:22:52.935 "state": "online", 00:22:52.935 "raid_level": "raid1", 00:22:52.935 "superblock": true, 00:22:52.935 
"num_base_bdevs": 2, 00:22:52.935 "num_base_bdevs_discovered": 1, 00:22:52.935 "num_base_bdevs_operational": 1, 00:22:52.935 "base_bdevs_list": [ 00:22:52.935 { 00:22:52.935 "name": null, 00:22:52.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.935 "is_configured": false, 00:22:52.935 "data_offset": 2048, 00:22:52.935 "data_size": 63488 00:22:52.935 }, 00:22:52.935 { 00:22:52.935 "name": "BaseBdev2", 00:22:52.935 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:52.935 "is_configured": true, 00:22:52.935 "data_offset": 2048, 00:22:52.935 "data_size": 63488 00:22:52.935 } 00:22:52.935 ] 00:22:52.935 }' 00:22:52.935 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:53.192 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:53.192 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:53.192 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:53.192 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:53.450 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:53.707 [2024-07-25 10:37:57.163753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:53.707 [2024-07-25 10:37:57.163813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.707 [2024-07-25 10:37:57.163839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x136b660 00:22:53.707 [2024-07-25 10:37:57.163852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:22:53.707 [2024-07-25 10:37:57.164266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.707 [2024-07-25 10:37:57.164287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:53.707 [2024-07-25 10:37:57.164396] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:53.708 [2024-07-25 10:37:57.164413] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:53.708 [2024-07-25 10:37:57.164421] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:53.708 BaseBdev1 00:22:53.708 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:22:54.641 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:54.641 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:54.641 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:54.641 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:54.641 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:54.641 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:54.641 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:54.641 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:54.641 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:54.641 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:54.641 10:37:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.641 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.899 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:54.899 "name": "raid_bdev1", 00:22:54.899 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:54.899 "strip_size_kb": 0, 00:22:54.899 "state": "online", 00:22:54.899 "raid_level": "raid1", 00:22:54.899 "superblock": true, 00:22:54.899 "num_base_bdevs": 2, 00:22:54.900 "num_base_bdevs_discovered": 1, 00:22:54.900 "num_base_bdevs_operational": 1, 00:22:54.900 "base_bdevs_list": [ 00:22:54.900 { 00:22:54.900 "name": null, 00:22:54.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.900 "is_configured": false, 00:22:54.900 "data_offset": 2048, 00:22:54.900 "data_size": 63488 00:22:54.900 }, 00:22:54.900 { 00:22:54.900 "name": "BaseBdev2", 00:22:54.900 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:54.900 "is_configured": true, 00:22:54.900 "data_offset": 2048, 00:22:54.900 "data_size": 63488 00:22:54.900 } 00:22:54.900 ] 00:22:54.900 }' 00:22:54.900 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:54.900 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:55.465 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:55.465 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:55.465 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:55.465 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:55.465 10:37:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:55.465 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.465 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:55.724 "name": "raid_bdev1", 00:22:55.724 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:55.724 "strip_size_kb": 0, 00:22:55.724 "state": "online", 00:22:55.724 "raid_level": "raid1", 00:22:55.724 "superblock": true, 00:22:55.724 "num_base_bdevs": 2, 00:22:55.724 "num_base_bdevs_discovered": 1, 00:22:55.724 "num_base_bdevs_operational": 1, 00:22:55.724 "base_bdevs_list": [ 00:22:55.724 { 00:22:55.724 "name": null, 00:22:55.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.724 "is_configured": false, 00:22:55.724 "data_offset": 2048, 00:22:55.724 "data_size": 63488 00:22:55.724 }, 00:22:55.724 { 00:22:55.724 "name": "BaseBdev2", 00:22:55.724 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:55.724 "is_configured": true, 00:22:55.724 "data_offset": 2048, 00:22:55.724 "data_size": 63488 00:22:55.724 } 00:22:55.724 ] 00:22:55.724 }' 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:55.724 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:55.982 [2024-07-25 10:37:59.594531] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:55.982 [2024-07-25 
10:37:59.594688] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:55.982 [2024-07-25 10:37:59.594704] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:55.982 request: 00:22:55.982 { 00:22:55.982 "base_bdev": "BaseBdev1", 00:22:55.982 "raid_bdev": "raid_bdev1", 00:22:55.982 "method": "bdev_raid_add_base_bdev", 00:22:55.982 "req_id": 1 00:22:55.982 } 00:22:55.982 Got JSON-RPC error response 00:22:55.982 response: 00:22:55.982 { 00:22:55.982 "code": -22, 00:22:55.982 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:55.982 } 00:22:55.982 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:22:55.982 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:55.982 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:55.982 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:55.982 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:22:56.917 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:56.917 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:56.917 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:56.917 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:56.917 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:56.917 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:56.917 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:56.917 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:56.917 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:56.917 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:56.917 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.917 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.175 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:57.175 "name": "raid_bdev1", 00:22:57.175 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:57.175 "strip_size_kb": 0, 00:22:57.175 "state": "online", 00:22:57.175 "raid_level": "raid1", 00:22:57.175 "superblock": true, 00:22:57.175 "num_base_bdevs": 2, 00:22:57.175 "num_base_bdevs_discovered": 1, 00:22:57.175 "num_base_bdevs_operational": 1, 00:22:57.175 "base_bdevs_list": [ 00:22:57.175 { 00:22:57.175 "name": null, 00:22:57.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.175 "is_configured": false, 00:22:57.175 "data_offset": 2048, 00:22:57.175 "data_size": 63488 00:22:57.175 }, 00:22:57.175 { 00:22:57.175 "name": "BaseBdev2", 00:22:57.175 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:57.175 "is_configured": true, 00:22:57.175 "data_offset": 2048, 00:22:57.175 "data_size": 63488 00:22:57.175 } 00:22:57.175 ] 00:22:57.175 }' 00:22:57.175 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:57.175 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:57.741 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none 
none 00:22:57.741 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:57.741 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:57.741 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:57.741 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:57.741 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.741 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.999 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:57.999 "name": "raid_bdev1", 00:22:57.999 "uuid": "1662a642-533b-4d60-bc68-b9dd0fccf57d", 00:22:57.999 "strip_size_kb": 0, 00:22:57.999 "state": "online", 00:22:57.999 "raid_level": "raid1", 00:22:57.999 "superblock": true, 00:22:57.999 "num_base_bdevs": 2, 00:22:57.999 "num_base_bdevs_discovered": 1, 00:22:57.999 "num_base_bdevs_operational": 1, 00:22:57.999 "base_bdevs_list": [ 00:22:57.999 { 00:22:57.999 "name": null, 00:22:57.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.000 "is_configured": false, 00:22:58.000 "data_offset": 2048, 00:22:58.000 "data_size": 63488 00:22:58.000 }, 00:22:58.000 { 00:22:58.000 "name": "BaseBdev2", 00:22:58.000 "uuid": "589599e5-262b-54a7-8483-91362088708b", 00:22:58.000 "is_configured": true, 00:22:58.000 "data_offset": 2048, 00:22:58.000 "data_size": 63488 00:22:58.000 } 00:22:58.000 ] 00:22:58.000 }' 00:22:58.000 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:58.000 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:58.000 10:38:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:58.258 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:58.258 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2440797 00:22:58.258 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 2440797 ']' 00:22:58.258 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 2440797 00:22:58.258 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:22:58.258 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:58.258 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2440797 00:22:58.258 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:58.258 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:58.258 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2440797' 00:22:58.258 killing process with pid 2440797 00:22:58.258 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 2440797 00:22:58.258 Received shutdown signal, test time was about 25.735256 seconds 00:22:58.258 00:22:58.258 Latency(us) 00:22:58.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.258 =================================================================================================================== 00:22:58.258 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:58.258 [2024-07-25 10:38:01.748418] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:58.258 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 
2440797 00:22:58.258 [2024-07-25 10:38:01.748562] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:58.258 [2024-07-25 10:38:01.748622] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:58.258 [2024-07-25 10:38:01.748635] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1501110 name raid_bdev1, state offline 00:22:58.258 [2024-07-25 10:38:01.776781] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:22:58.524 00:22:58.524 real 0m30.307s 00:22:58.524 user 0m48.659s 00:22:58.524 sys 0m3.640s 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:58.524 ************************************ 00:22:58.524 END TEST raid_rebuild_test_sb_io 00:22:58.524 ************************************ 00:22:58.524 10:38:02 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:22:58.524 10:38:02 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:22:58.524 10:38:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:58.524 10:38:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:58.524 10:38:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:58.524 ************************************ 00:22:58.524 START TEST raid_rebuild_test 00:22:58.524 ************************************ 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local 
num_base_bdevs=4 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 
-- # local raid_bdev_name=raid_bdev1 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2444774 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2444774 /var/tmp/spdk-raid.sock 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 2444774 ']' 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:58.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.524 10:38:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.524 [2024-07-25 10:38:02.164838] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:22:58.524 [2024-07-25 10:38:02.164922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444774 ] 00:22:58.524 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:58.524 Zero copy mechanism will not be used. 00:22:58.864 [2024-07-25 10:38:02.255835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.864 [2024-07-25 10:38:02.379273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.864 [2024-07-25 10:38:02.455411] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:58.864 [2024-07-25 10:38:02.455455] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:58.864 10:38:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.864 10:38:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:22:58.864 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:58.864 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:59.123 BaseBdev1_malloc 00:22:59.123 10:38:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:59.380 [2024-07-25 
10:38:03.081022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:59.380 [2024-07-25 10:38:03.081094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.380 [2024-07-25 10:38:03.081150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc7f430 00:22:59.380 [2024-07-25 10:38:03.081167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.380 [2024-07-25 10:38:03.082839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.380 [2024-07-25 10:38:03.082868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:59.380 BaseBdev1 00:22:59.637 10:38:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:59.637 10:38:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:59.637 BaseBdev2_malloc 00:22:59.893 10:38:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:00.151 [2024-07-25 10:38:03.621635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:00.151 [2024-07-25 10:38:03.621699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.151 [2024-07-25 10:38:03.621733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe22a20 00:23:00.151 [2024-07-25 10:38:03.621750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.151 [2024-07-25 10:38:03.623531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.151 [2024-07-25 10:38:03.623559] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:00.151 BaseBdev2 00:23:00.151 10:38:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:00.151 10:38:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:00.409 BaseBdev3_malloc 00:23:00.410 10:38:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:00.668 [2024-07-25 10:38:04.215582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:00.668 [2024-07-25 10:38:04.215652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.668 [2024-07-25 10:38:04.215682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe19700 00:23:00.668 [2024-07-25 10:38:04.215698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.668 [2024-07-25 10:38:04.217503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.668 [2024-07-25 10:38:04.217531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:00.668 BaseBdev3 00:23:00.668 10:38:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:00.668 10:38:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:00.927 BaseBdev4_malloc 00:23:00.927 10:38:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc 
-p BaseBdev4 00:23:01.185 [2024-07-25 10:38:04.805340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:01.185 [2024-07-25 10:38:04.805411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.185 [2024-07-25 10:38:04.805442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe17290 00:23:01.185 [2024-07-25 10:38:04.805458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.185 [2024-07-25 10:38:04.807292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.185 [2024-07-25 10:38:04.807321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:01.185 BaseBdev4 00:23:01.185 10:38:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:01.444 spare_malloc 00:23:01.444 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:01.702 spare_delay 00:23:01.702 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:01.960 [2024-07-25 10:38:05.562151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:01.960 [2024-07-25 10:38:05.562209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.960 [2024-07-25 10:38:05.562237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc78330 00:23:01.960 [2024-07-25 10:38:05.562252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.960 
[2024-07-25 10:38:05.563930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.960 [2024-07-25 10:38:05.563959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:01.960 spare 00:23:01.960 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:02.219 [2024-07-25 10:38:05.802819] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:02.219 [2024-07-25 10:38:05.804039] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:02.219 [2024-07-25 10:38:05.804119] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:02.219 [2024-07-25 10:38:05.804180] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:02.219 [2024-07-25 10:38:05.804270] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xc79a10 00:23:02.219 [2024-07-25 10:38:05.804283] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:02.219 [2024-07-25 10:38:05.804509] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc963f0 00:23:02.219 [2024-07-25 10:38:05.804671] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc79a10 00:23:02.219 [2024-07-25 10:38:05.804684] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc79a10 00:23:02.219 [2024-07-25 10:38:05.804826] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:02.219 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:02.219 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:23:02.219 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:02.219 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:02.219 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:02.219 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:02.219 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:02.219 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:02.219 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:02.219 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:02.219 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.219 10:38:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.478 10:38:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.478 "name": "raid_bdev1", 00:23:02.478 "uuid": "2e47878f-27fb-4d1c-8a87-2605658adede", 00:23:02.478 "strip_size_kb": 0, 00:23:02.478 "state": "online", 00:23:02.478 "raid_level": "raid1", 00:23:02.478 "superblock": false, 00:23:02.478 "num_base_bdevs": 4, 00:23:02.478 "num_base_bdevs_discovered": 4, 00:23:02.478 "num_base_bdevs_operational": 4, 00:23:02.478 "base_bdevs_list": [ 00:23:02.478 { 00:23:02.478 "name": "BaseBdev1", 00:23:02.478 "uuid": "71d5c733-cd4d-5f95-8685-3582af056576", 00:23:02.478 "is_configured": true, 00:23:02.478 "data_offset": 0, 00:23:02.478 "data_size": 65536 00:23:02.478 }, 00:23:02.478 { 00:23:02.478 "name": "BaseBdev2", 00:23:02.478 "uuid": "7b9896aa-1fd3-59ad-bf6e-a4c797fc159c", 00:23:02.478 "is_configured": 
true, 00:23:02.478 "data_offset": 0, 00:23:02.478 "data_size": 65536 00:23:02.478 }, 00:23:02.478 { 00:23:02.478 "name": "BaseBdev3", 00:23:02.478 "uuid": "6bd62807-d717-521b-9720-951f56594793", 00:23:02.478 "is_configured": true, 00:23:02.478 "data_offset": 0, 00:23:02.478 "data_size": 65536 00:23:02.478 }, 00:23:02.478 { 00:23:02.478 "name": "BaseBdev4", 00:23:02.478 "uuid": "4207cd09-af58-58ef-b9c5-fdf2405c9c05", 00:23:02.478 "is_configured": true, 00:23:02.478 "data_offset": 0, 00:23:02.478 "data_size": 65536 00:23:02.478 } 00:23:02.478 ] 00:23:02.478 }' 00:23:02.478 10:38:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.478 10:38:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.044 10:38:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:03.044 10:38:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:03.302 [2024-07-25 10:38:06.938113] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:03.302 10:38:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:23:03.302 10:38:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.302 10:38:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 
00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:03.560 10:38:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:03.818 [2024-07-25 10:38:07.475322] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc7eba0 00:23:03.818 /dev/nbd0 00:23:03.818 10:38:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:03.818 10:38:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:03.818 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:03.818 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:03.818 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:03.818 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:03.818 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:23:03.819 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:03.819 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:03.819 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:03.819 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:03.819 1+0 records in 00:23:03.819 1+0 records out 00:23:03.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019975 s, 20.5 MB/s 00:23:03.819 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:03.819 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:03.819 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:04.076 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:04.076 10:38:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:23:04.076 10:38:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:04.076 10:38:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:04.076 10:38:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:23:04.076 10:38:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:23:04.076 10:38:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:23:12.182 65536+0 records in 00:23:12.182 65536+0 records out 00:23:12.182 33554432 bytes (34 MB, 32 MiB) copied, 7.37581 s, 4.5 MB/s 00:23:12.182 10:38:14 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:12.182 10:38:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:12.182 10:38:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:12.182 10:38:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:12.182 10:38:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:12.182 10:38:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.182 10:38:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:12.182 [2024-07-25 10:38:15.206970] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.182 [2024-07-25 10:38:15.403527] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:12.182 
10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.182 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.182 "name": "raid_bdev1", 00:23:12.182 "uuid": "2e47878f-27fb-4d1c-8a87-2605658adede", 00:23:12.182 "strip_size_kb": 0, 00:23:12.182 "state": "online", 00:23:12.182 "raid_level": "raid1", 00:23:12.182 "superblock": false, 00:23:12.182 "num_base_bdevs": 4, 00:23:12.182 "num_base_bdevs_discovered": 3, 00:23:12.182 "num_base_bdevs_operational": 3, 00:23:12.182 "base_bdevs_list": [ 00:23:12.182 { 00:23:12.183 "name": null, 00:23:12.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.183 "is_configured": 
false, 00:23:12.183 "data_offset": 0, 00:23:12.183 "data_size": 65536 00:23:12.183 }, 00:23:12.183 { 00:23:12.183 "name": "BaseBdev2", 00:23:12.183 "uuid": "7b9896aa-1fd3-59ad-bf6e-a4c797fc159c", 00:23:12.183 "is_configured": true, 00:23:12.183 "data_offset": 0, 00:23:12.183 "data_size": 65536 00:23:12.183 }, 00:23:12.183 { 00:23:12.183 "name": "BaseBdev3", 00:23:12.183 "uuid": "6bd62807-d717-521b-9720-951f56594793", 00:23:12.183 "is_configured": true, 00:23:12.183 "data_offset": 0, 00:23:12.183 "data_size": 65536 00:23:12.183 }, 00:23:12.183 { 00:23:12.183 "name": "BaseBdev4", 00:23:12.183 "uuid": "4207cd09-af58-58ef-b9c5-fdf2405c9c05", 00:23:12.183 "is_configured": true, 00:23:12.183 "data_offset": 0, 00:23:12.183 "data_size": 65536 00:23:12.183 } 00:23:12.183 ] 00:23:12.183 }' 00:23:12.183 10:38:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.183 10:38:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.748 10:38:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:12.749 [2024-07-25 10:38:16.454318] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:13.007 [2024-07-25 10:38:16.459672] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc7ed50 00:23:13.007 [2024-07-25 10:38:16.461916] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:13.007 10:38:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:13.939 10:38:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:13.939 10:38:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:13.939 10:38:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:23:13.939 10:38:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:13.939 10:38:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:13.939 10:38:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.939 10:38:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.197 10:38:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:14.197 "name": "raid_bdev1", 00:23:14.197 "uuid": "2e47878f-27fb-4d1c-8a87-2605658adede", 00:23:14.197 "strip_size_kb": 0, 00:23:14.197 "state": "online", 00:23:14.197 "raid_level": "raid1", 00:23:14.197 "superblock": false, 00:23:14.197 "num_base_bdevs": 4, 00:23:14.197 "num_base_bdevs_discovered": 4, 00:23:14.197 "num_base_bdevs_operational": 4, 00:23:14.197 "process": { 00:23:14.197 "type": "rebuild", 00:23:14.197 "target": "spare", 00:23:14.197 "progress": { 00:23:14.197 "blocks": 24576, 00:23:14.197 "percent": 37 00:23:14.197 } 00:23:14.197 }, 00:23:14.197 "base_bdevs_list": [ 00:23:14.197 { 00:23:14.197 "name": "spare", 00:23:14.197 "uuid": "861e8c41-5d99-5d4a-a752-24ad78c73088", 00:23:14.197 "is_configured": true, 00:23:14.197 "data_offset": 0, 00:23:14.197 "data_size": 65536 00:23:14.197 }, 00:23:14.197 { 00:23:14.197 "name": "BaseBdev2", 00:23:14.197 "uuid": "7b9896aa-1fd3-59ad-bf6e-a4c797fc159c", 00:23:14.197 "is_configured": true, 00:23:14.197 "data_offset": 0, 00:23:14.197 "data_size": 65536 00:23:14.197 }, 00:23:14.197 { 00:23:14.197 "name": "BaseBdev3", 00:23:14.197 "uuid": "6bd62807-d717-521b-9720-951f56594793", 00:23:14.197 "is_configured": true, 00:23:14.197 "data_offset": 0, 00:23:14.197 "data_size": 65536 00:23:14.197 }, 00:23:14.197 { 00:23:14.197 "name": "BaseBdev4", 00:23:14.197 "uuid": 
"4207cd09-af58-58ef-b9c5-fdf2405c9c05", 00:23:14.197 "is_configured": true, 00:23:14.197 "data_offset": 0, 00:23:14.197 "data_size": 65536 00:23:14.197 } 00:23:14.197 ] 00:23:14.197 }' 00:23:14.197 10:38:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:14.197 10:38:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:14.197 10:38:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:14.197 10:38:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:14.197 10:38:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:14.455 [2024-07-25 10:38:18.036168] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:14.455 [2024-07-25 10:38:18.075363] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:14.455 [2024-07-25 10:38:18.075415] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.455 [2024-07-25 10:38:18.075438] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:14.455 [2024-07-25 10:38:18.075449] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:14.455 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:14.455 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:14.455 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:14.455 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:14.455 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:23:14.455 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:14.455 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.455 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.455 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.455 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.455 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.455 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.713 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.713 "name": "raid_bdev1", 00:23:14.713 "uuid": "2e47878f-27fb-4d1c-8a87-2605658adede", 00:23:14.713 "strip_size_kb": 0, 00:23:14.713 "state": "online", 00:23:14.713 "raid_level": "raid1", 00:23:14.713 "superblock": false, 00:23:14.713 "num_base_bdevs": 4, 00:23:14.713 "num_base_bdevs_discovered": 3, 00:23:14.713 "num_base_bdevs_operational": 3, 00:23:14.713 "base_bdevs_list": [ 00:23:14.713 { 00:23:14.713 "name": null, 00:23:14.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.713 "is_configured": false, 00:23:14.713 "data_offset": 0, 00:23:14.713 "data_size": 65536 00:23:14.713 }, 00:23:14.713 { 00:23:14.713 "name": "BaseBdev2", 00:23:14.713 "uuid": "7b9896aa-1fd3-59ad-bf6e-a4c797fc159c", 00:23:14.713 "is_configured": true, 00:23:14.713 "data_offset": 0, 00:23:14.713 "data_size": 65536 00:23:14.713 }, 00:23:14.713 { 00:23:14.713 "name": "BaseBdev3", 00:23:14.713 "uuid": "6bd62807-d717-521b-9720-951f56594793", 00:23:14.713 "is_configured": true, 00:23:14.713 "data_offset": 0, 00:23:14.713 "data_size": 65536 
00:23:14.713 }, 00:23:14.713 { 00:23:14.713 "name": "BaseBdev4", 00:23:14.713 "uuid": "4207cd09-af58-58ef-b9c5-fdf2405c9c05", 00:23:14.713 "is_configured": true, 00:23:14.713 "data_offset": 0, 00:23:14.713 "data_size": 65536 00:23:14.713 } 00:23:14.713 ] 00:23:14.713 }' 00:23:14.713 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:14.713 10:38:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.277 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:15.277 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:15.277 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:15.277 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:15.277 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:15.277 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.277 10:38:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.535 10:38:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:15.535 "name": "raid_bdev1", 00:23:15.535 "uuid": "2e47878f-27fb-4d1c-8a87-2605658adede", 00:23:15.535 "strip_size_kb": 0, 00:23:15.535 "state": "online", 00:23:15.535 "raid_level": "raid1", 00:23:15.535 "superblock": false, 00:23:15.535 "num_base_bdevs": 4, 00:23:15.535 "num_base_bdevs_discovered": 3, 00:23:15.535 "num_base_bdevs_operational": 3, 00:23:15.535 "base_bdevs_list": [ 00:23:15.535 { 00:23:15.535 "name": null, 00:23:15.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.535 "is_configured": false, 00:23:15.535 "data_offset": 0, 00:23:15.535 
"data_size": 65536 00:23:15.535 }, 00:23:15.535 { 00:23:15.535 "name": "BaseBdev2", 00:23:15.535 "uuid": "7b9896aa-1fd3-59ad-bf6e-a4c797fc159c", 00:23:15.535 "is_configured": true, 00:23:15.535 "data_offset": 0, 00:23:15.535 "data_size": 65536 00:23:15.535 }, 00:23:15.535 { 00:23:15.535 "name": "BaseBdev3", 00:23:15.535 "uuid": "6bd62807-d717-521b-9720-951f56594793", 00:23:15.535 "is_configured": true, 00:23:15.535 "data_offset": 0, 00:23:15.535 "data_size": 65536 00:23:15.535 }, 00:23:15.535 { 00:23:15.535 "name": "BaseBdev4", 00:23:15.535 "uuid": "4207cd09-af58-58ef-b9c5-fdf2405c9c05", 00:23:15.535 "is_configured": true, 00:23:15.535 "data_offset": 0, 00:23:15.535 "data_size": 65536 00:23:15.535 } 00:23:15.535 ] 00:23:15.535 }' 00:23:15.535 10:38:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:15.535 10:38:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:15.535 10:38:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:15.535 10:38:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:15.535 10:38:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:15.793 [2024-07-25 10:38:19.460800] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:15.793 [2024-07-25 10:38:19.466092] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc7ed50 00:23:15.793 [2024-07-25 10:38:19.467696] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:15.793 10:38:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:17.168 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:17.168 
10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:17.168 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:17.168 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:17.168 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:17.169 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.169 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.169 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:17.169 "name": "raid_bdev1", 00:23:17.169 "uuid": "2e47878f-27fb-4d1c-8a87-2605658adede", 00:23:17.169 "strip_size_kb": 0, 00:23:17.169 "state": "online", 00:23:17.169 "raid_level": "raid1", 00:23:17.169 "superblock": false, 00:23:17.169 "num_base_bdevs": 4, 00:23:17.169 "num_base_bdevs_discovered": 4, 00:23:17.169 "num_base_bdevs_operational": 4, 00:23:17.169 "process": { 00:23:17.169 "type": "rebuild", 00:23:17.169 "target": "spare", 00:23:17.169 "progress": { 00:23:17.169 "blocks": 24576, 00:23:17.169 "percent": 37 00:23:17.169 } 00:23:17.169 }, 00:23:17.169 "base_bdevs_list": [ 00:23:17.169 { 00:23:17.169 "name": "spare", 00:23:17.169 "uuid": "861e8c41-5d99-5d4a-a752-24ad78c73088", 00:23:17.169 "is_configured": true, 00:23:17.169 "data_offset": 0, 00:23:17.169 "data_size": 65536 00:23:17.169 }, 00:23:17.169 { 00:23:17.169 "name": "BaseBdev2", 00:23:17.169 "uuid": "7b9896aa-1fd3-59ad-bf6e-a4c797fc159c", 00:23:17.169 "is_configured": true, 00:23:17.169 "data_offset": 0, 00:23:17.169 "data_size": 65536 00:23:17.169 }, 00:23:17.169 { 00:23:17.169 "name": "BaseBdev3", 00:23:17.169 "uuid": "6bd62807-d717-521b-9720-951f56594793", 00:23:17.169 
"is_configured": true, 00:23:17.169 "data_offset": 0, 00:23:17.169 "data_size": 65536 00:23:17.169 }, 00:23:17.169 { 00:23:17.169 "name": "BaseBdev4", 00:23:17.169 "uuid": "4207cd09-af58-58ef-b9c5-fdf2405c9c05", 00:23:17.169 "is_configured": true, 00:23:17.169 "data_offset": 0, 00:23:17.169 "data_size": 65536 00:23:17.169 } 00:23:17.169 ] 00:23:17.169 }' 00:23:17.169 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:17.169 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:17.169 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:17.169 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:17.169 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:23:17.169 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:23:17.169 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:17.169 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:23:17.169 10:38:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:17.427 [2024-07-25 10:38:21.014601] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:17.427 [2024-07-25 10:38:21.081336] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xc7ed50 00:23:17.427 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:23:17.427 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:23:17.427 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:23:17.427 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:17.427 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:17.427 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:17.427 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:17.427 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.427 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.684 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:17.684 "name": "raid_bdev1", 00:23:17.684 "uuid": "2e47878f-27fb-4d1c-8a87-2605658adede", 00:23:17.684 "strip_size_kb": 0, 00:23:17.684 "state": "online", 00:23:17.684 "raid_level": "raid1", 00:23:17.684 "superblock": false, 00:23:17.684 "num_base_bdevs": 4, 00:23:17.684 "num_base_bdevs_discovered": 3, 00:23:17.684 "num_base_bdevs_operational": 3, 00:23:17.684 "process": { 00:23:17.684 "type": "rebuild", 00:23:17.684 "target": "spare", 00:23:17.684 "progress": { 00:23:17.684 "blocks": 36864, 00:23:17.684 "percent": 56 00:23:17.684 } 00:23:17.684 }, 00:23:17.684 "base_bdevs_list": [ 00:23:17.684 { 00:23:17.685 "name": "spare", 00:23:17.685 "uuid": "861e8c41-5d99-5d4a-a752-24ad78c73088", 00:23:17.685 "is_configured": true, 00:23:17.685 "data_offset": 0, 00:23:17.685 "data_size": 65536 00:23:17.685 }, 00:23:17.685 { 00:23:17.685 "name": null, 00:23:17.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.685 "is_configured": false, 00:23:17.685 "data_offset": 0, 00:23:17.685 "data_size": 65536 00:23:17.685 }, 00:23:17.685 { 00:23:17.685 "name": "BaseBdev3", 00:23:17.685 "uuid": "6bd62807-d717-521b-9720-951f56594793", 00:23:17.685 
"is_configured": true, 00:23:17.685 "data_offset": 0, 00:23:17.685 "data_size": 65536 00:23:17.685 }, 00:23:17.685 { 00:23:17.685 "name": "BaseBdev4", 00:23:17.685 "uuid": "4207cd09-af58-58ef-b9c5-fdf2405c9c05", 00:23:17.685 "is_configured": true, 00:23:17.685 "data_offset": 0, 00:23:17.685 "data_size": 65536 00:23:17.685 } 00:23:17.685 ] 00:23:17.685 }' 00:23:17.685 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:17.942 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:17.942 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:17.942 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:17.942 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=865 00:23:17.942 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:17.942 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:17.942 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:17.942 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:17.942 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:17.942 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:17.942 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.942 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.200 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:18.200 "name": 
"raid_bdev1", 00:23:18.200 "uuid": "2e47878f-27fb-4d1c-8a87-2605658adede", 00:23:18.200 "strip_size_kb": 0, 00:23:18.200 "state": "online", 00:23:18.200 "raid_level": "raid1", 00:23:18.200 "superblock": false, 00:23:18.200 "num_base_bdevs": 4, 00:23:18.200 "num_base_bdevs_discovered": 3, 00:23:18.200 "num_base_bdevs_operational": 3, 00:23:18.200 "process": { 00:23:18.200 "type": "rebuild", 00:23:18.200 "target": "spare", 00:23:18.200 "progress": { 00:23:18.200 "blocks": 43008, 00:23:18.200 "percent": 65 00:23:18.200 } 00:23:18.200 }, 00:23:18.200 "base_bdevs_list": [ 00:23:18.200 { 00:23:18.200 "name": "spare", 00:23:18.200 "uuid": "861e8c41-5d99-5d4a-a752-24ad78c73088", 00:23:18.200 "is_configured": true, 00:23:18.200 "data_offset": 0, 00:23:18.200 "data_size": 65536 00:23:18.200 }, 00:23:18.200 { 00:23:18.200 "name": null, 00:23:18.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.200 "is_configured": false, 00:23:18.200 "data_offset": 0, 00:23:18.200 "data_size": 65536 00:23:18.200 }, 00:23:18.200 { 00:23:18.200 "name": "BaseBdev3", 00:23:18.200 "uuid": "6bd62807-d717-521b-9720-951f56594793", 00:23:18.200 "is_configured": true, 00:23:18.200 "data_offset": 0, 00:23:18.200 "data_size": 65536 00:23:18.200 }, 00:23:18.200 { 00:23:18.200 "name": "BaseBdev4", 00:23:18.200 "uuid": "4207cd09-af58-58ef-b9c5-fdf2405c9c05", 00:23:18.200 "is_configured": true, 00:23:18.200 "data_offset": 0, 00:23:18.200 "data_size": 65536 00:23:18.200 } 00:23:18.200 ] 00:23:18.200 }' 00:23:18.200 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:18.200 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.200 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:18.200 10:38:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.200 10:38:21 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@710 -- # sleep 1 00:23:19.133 [2024-07-25 10:38:22.694980] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:19.133 [2024-07-25 10:38:22.695043] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:19.133 [2024-07-25 10:38:22.695091] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.133 10:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:19.133 10:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:19.133 10:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:19.133 10:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:19.133 10:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:19.133 10:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:19.133 10:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.133 10:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.390 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:19.390 "name": "raid_bdev1", 00:23:19.390 "uuid": "2e47878f-27fb-4d1c-8a87-2605658adede", 00:23:19.390 "strip_size_kb": 0, 00:23:19.390 "state": "online", 00:23:19.390 "raid_level": "raid1", 00:23:19.390 "superblock": false, 00:23:19.390 "num_base_bdevs": 4, 00:23:19.390 "num_base_bdevs_discovered": 3, 00:23:19.390 "num_base_bdevs_operational": 3, 00:23:19.390 "base_bdevs_list": [ 00:23:19.390 { 00:23:19.390 "name": "spare", 00:23:19.390 "uuid": "861e8c41-5d99-5d4a-a752-24ad78c73088", 00:23:19.390 
"is_configured": true, 00:23:19.390 "data_offset": 0, 00:23:19.391 "data_size": 65536 00:23:19.391 }, 00:23:19.391 { 00:23:19.391 "name": null, 00:23:19.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.391 "is_configured": false, 00:23:19.391 "data_offset": 0, 00:23:19.391 "data_size": 65536 00:23:19.391 }, 00:23:19.391 { 00:23:19.391 "name": "BaseBdev3", 00:23:19.391 "uuid": "6bd62807-d717-521b-9720-951f56594793", 00:23:19.391 "is_configured": true, 00:23:19.391 "data_offset": 0, 00:23:19.391 "data_size": 65536 00:23:19.391 }, 00:23:19.391 { 00:23:19.391 "name": "BaseBdev4", 00:23:19.391 "uuid": "4207cd09-af58-58ef-b9c5-fdf2405c9c05", 00:23:19.391 "is_configured": true, 00:23:19.391 "data_offset": 0, 00:23:19.391 "data_size": 65536 00:23:19.391 } 00:23:19.391 ] 00:23:19.391 }' 00:23:19.391 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:19.391 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:19.391 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:19.648 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:19.648 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:23:19.648 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:19.648 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:19.648 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:19.648 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:19.648 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:19.648 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.649 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:19.914 "name": "raid_bdev1", 00:23:19.914 "uuid": "2e47878f-27fb-4d1c-8a87-2605658adede", 00:23:19.914 "strip_size_kb": 0, 00:23:19.914 "state": "online", 00:23:19.914 "raid_level": "raid1", 00:23:19.914 "superblock": false, 00:23:19.914 "num_base_bdevs": 4, 00:23:19.914 "num_base_bdevs_discovered": 3, 00:23:19.914 "num_base_bdevs_operational": 3, 00:23:19.914 "base_bdevs_list": [ 00:23:19.914 { 00:23:19.914 "name": "spare", 00:23:19.914 "uuid": "861e8c41-5d99-5d4a-a752-24ad78c73088", 00:23:19.914 "is_configured": true, 00:23:19.914 "data_offset": 0, 00:23:19.914 "data_size": 65536 00:23:19.914 }, 00:23:19.914 { 00:23:19.914 "name": null, 00:23:19.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.914 "is_configured": false, 00:23:19.914 "data_offset": 0, 00:23:19.914 "data_size": 65536 00:23:19.914 }, 00:23:19.914 { 00:23:19.914 "name": "BaseBdev3", 00:23:19.914 "uuid": "6bd62807-d717-521b-9720-951f56594793", 00:23:19.914 "is_configured": true, 00:23:19.914 "data_offset": 0, 00:23:19.914 "data_size": 65536 00:23:19.914 }, 00:23:19.914 { 00:23:19.914 "name": "BaseBdev4", 00:23:19.914 "uuid": "4207cd09-af58-58ef-b9c5-fdf2405c9c05", 00:23:19.914 "is_configured": true, 00:23:19.914 "data_offset": 0, 00:23:19.914 "data_size": 65536 00:23:19.914 } 00:23:19.914 ] 00:23:19.914 }' 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:19.914 10:38:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.914 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.181 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:20.181 "name": "raid_bdev1", 00:23:20.181 "uuid": "2e47878f-27fb-4d1c-8a87-2605658adede", 00:23:20.181 "strip_size_kb": 0, 00:23:20.181 "state": "online", 00:23:20.181 "raid_level": "raid1", 00:23:20.181 "superblock": false, 00:23:20.181 "num_base_bdevs": 4, 00:23:20.181 "num_base_bdevs_discovered": 3, 00:23:20.181 "num_base_bdevs_operational": 3, 00:23:20.181 "base_bdevs_list": [ 00:23:20.181 { 00:23:20.181 "name": 
"spare", 00:23:20.181 "uuid": "861e8c41-5d99-5d4a-a752-24ad78c73088", 00:23:20.181 "is_configured": true, 00:23:20.181 "data_offset": 0, 00:23:20.181 "data_size": 65536 00:23:20.181 }, 00:23:20.181 { 00:23:20.181 "name": null, 00:23:20.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.181 "is_configured": false, 00:23:20.181 "data_offset": 0, 00:23:20.181 "data_size": 65536 00:23:20.181 }, 00:23:20.181 { 00:23:20.181 "name": "BaseBdev3", 00:23:20.181 "uuid": "6bd62807-d717-521b-9720-951f56594793", 00:23:20.181 "is_configured": true, 00:23:20.181 "data_offset": 0, 00:23:20.181 "data_size": 65536 00:23:20.181 }, 00:23:20.181 { 00:23:20.181 "name": "BaseBdev4", 00:23:20.181 "uuid": "4207cd09-af58-58ef-b9c5-fdf2405c9c05", 00:23:20.181 "is_configured": true, 00:23:20.181 "data_offset": 0, 00:23:20.181 "data_size": 65536 00:23:20.181 } 00:23:20.181 ] 00:23:20.181 }' 00:23:20.181 10:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:20.181 10:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.746 10:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:21.005 [2024-07-25 10:38:24.497051] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:21.005 [2024-07-25 10:38:24.497080] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:21.005 [2024-07-25 10:38:24.497163] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:21.005 [2024-07-25 10:38:24.497252] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:21.005 [2024-07-25 10:38:24.497269] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc79a10 name raid_bdev1, state offline 00:23:21.005 10:38:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.005 10:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:23:21.262 10:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:21.262 10:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:23:21.262 10:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:21.262 10:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:21.262 10:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:21.262 10:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:21.262 10:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:21.262 10:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:21.262 10:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:21.262 10:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:21.262 10:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:21.262 10:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:21.262 10:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:21.520 /dev/nbd0 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:21.520 1+0 records in 00:23:21.520 1+0 records out 00:23:21.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177008 s, 23.1 MB/s 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:21.520 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:21.777 /dev/nbd1 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:21.777 1+0 records in 00:23:21.777 1+0 records out 00:23:21.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231923 s, 17.7 MB/s 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:21.777 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:22.062 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:22.062 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:22.062 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:22.062 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:22.062 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:22.062 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:22.062 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:23:22.062 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:22.062 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:22.062 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2444774 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 2444774 ']' 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 2444774 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2444774 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2444774' 00:23:22.321 killing process with pid 2444774 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 2444774 00:23:22.321 Received shutdown signal, test time was about 60.000000 seconds 00:23:22.321 00:23:22.321 Latency(us) 00:23:22.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.321 =================================================================================================================== 00:23:22.321 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:22.321 [2024-07-25 10:38:25.995252] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:22.321 10:38:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 2444774 00:23:22.578 [2024-07-25 10:38:26.061080] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:23:22.837 00:23:22.837 real 0m24.247s 00:23:22.837 user 0m32.703s 00:23:22.837 sys 0m5.084s 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.837 ************************************ 00:23:22.837 END TEST raid_rebuild_test 00:23:22.837 ************************************ 00:23:22.837 10:38:26 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:23:22.837 10:38:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:23:22.837 10:38:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:22.837 10:38:26 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:23:22.837 ************************************ 00:23:22.837 START TEST raid_rebuild_test_sb 00:23:22.837 ************************************ 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:22.837 10:38:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2447845 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2447845 /var/tmp/spdk-raid.sock 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@831 -- # '[' -z 2447845 ']' 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:22.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.837 10:38:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.837 [2024-07-25 10:38:26.454927] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:23:22.837 [2024-07-25 10:38:26.455007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2447845 ] 00:23:22.837 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:22.837 Zero copy mechanism will not be used. 
00:23:23.095 [2024-07-25 10:38:26.545818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.095 [2024-07-25 10:38:26.668805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.095 [2024-07-25 10:38:26.735476] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:23.095 [2024-07-25 10:38:26.735521] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:23.095 10:38:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:23.095 10:38:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:23:23.095 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:23.095 10:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:23.661 BaseBdev1_malloc 00:23:23.661 10:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:23.661 [2024-07-25 10:38:27.325039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:23.661 [2024-07-25 10:38:27.325122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.661 [2024-07-25 10:38:27.325170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb09430 00:23:23.661 [2024-07-25 10:38:27.325185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.661 [2024-07-25 10:38:27.326947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.661 [2024-07-25 10:38:27.326970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:23.661 BaseBdev1 
00:23:23.661 10:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:23.661 10:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:23.919 BaseBdev2_malloc 00:23:23.919 10:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:24.177 [2024-07-25 10:38:27.834372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:24.177 [2024-07-25 10:38:27.834448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.177 [2024-07-25 10:38:27.834496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcaca20 00:23:24.177 [2024-07-25 10:38:27.834518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.177 [2024-07-25 10:38:27.836162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.177 [2024-07-25 10:38:27.836187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:24.177 BaseBdev2 00:23:24.177 10:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:24.177 10:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:24.435 BaseBdev3_malloc 00:23:24.435 10:38:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:24.693 [2024-07-25 10:38:28.346363] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:24.693 [2024-07-25 10:38:28.346435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.693 [2024-07-25 10:38:28.346473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xca3700 00:23:24.693 [2024-07-25 10:38:28.346487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.693 [2024-07-25 10:38:28.348012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.693 [2024-07-25 10:38:28.348035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:24.693 BaseBdev3 00:23:24.693 10:38:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:24.693 10:38:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:24.951 BaseBdev4_malloc 00:23:24.951 10:38:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:25.209 [2024-07-25 10:38:28.890266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:25.209 [2024-07-25 10:38:28.890334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.209 [2024-07-25 10:38:28.890363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xca1290 00:23:25.209 [2024-07-25 10:38:28.890379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.209 [2024-07-25 10:38:28.892113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.209 [2024-07-25 10:38:28.892141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev4 00:23:25.209 BaseBdev4 00:23:25.209 10:38:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:25.775 spare_malloc 00:23:25.775 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:25.775 spare_delay 00:23:25.775 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:26.032 [2024-07-25 10:38:29.696123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:26.032 [2024-07-25 10:38:29.696198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.032 [2024-07-25 10:38:29.696240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb02330 00:23:26.032 [2024-07-25 10:38:29.696255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.032 [2024-07-25 10:38:29.698085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.032 [2024-07-25 10:38:29.698121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:26.032 spare 00:23:26.033 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:26.290 [2024-07-25 10:38:29.940834] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:26.290 [2024-07-25 10:38:29.942059] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:26.290 [2024-07-25 10:38:29.942139] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:26.290 [2024-07-25 10:38:29.942206] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:26.290 [2024-07-25 10:38:29.942437] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xb03a10 00:23:26.290 [2024-07-25 10:38:29.942452] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:26.290 [2024-07-25 10:38:29.942668] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb203f0 00:23:26.290 [2024-07-25 10:38:29.942828] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb03a10 00:23:26.290 [2024-07-25 10:38:29.942842] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb03a10 00:23:26.291 [2024-07-25 10:38:29.942959] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.291 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:26.291 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:26.291 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:26.291 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:26.291 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:26.291 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:26.291 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:26.291 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:26.291 
10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:26.291 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:26.291 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.291 10:38:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.549 10:38:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:26.549 "name": "raid_bdev1", 00:23:26.549 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:26.549 "strip_size_kb": 0, 00:23:26.549 "state": "online", 00:23:26.549 "raid_level": "raid1", 00:23:26.549 "superblock": true, 00:23:26.549 "num_base_bdevs": 4, 00:23:26.549 "num_base_bdevs_discovered": 4, 00:23:26.549 "num_base_bdevs_operational": 4, 00:23:26.549 "base_bdevs_list": [ 00:23:26.549 { 00:23:26.549 "name": "BaseBdev1", 00:23:26.549 "uuid": "a56411d8-38b9-5c78-931f-7b1803f949e7", 00:23:26.549 "is_configured": true, 00:23:26.549 "data_offset": 2048, 00:23:26.549 "data_size": 63488 00:23:26.549 }, 00:23:26.549 { 00:23:26.549 "name": "BaseBdev2", 00:23:26.549 "uuid": "5cfb972a-cbf6-528a-bc02-5f9e38940c20", 00:23:26.549 "is_configured": true, 00:23:26.549 "data_offset": 2048, 00:23:26.549 "data_size": 63488 00:23:26.549 }, 00:23:26.549 { 00:23:26.549 "name": "BaseBdev3", 00:23:26.549 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:26.549 "is_configured": true, 00:23:26.549 "data_offset": 2048, 00:23:26.549 "data_size": 63488 00:23:26.549 }, 00:23:26.549 { 00:23:26.549 "name": "BaseBdev4", 00:23:26.549 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:26.549 "is_configured": true, 00:23:26.549 "data_offset": 2048, 00:23:26.549 "data_size": 63488 00:23:26.549 } 00:23:26.549 ] 00:23:26.549 }' 00:23:26.549 10:38:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:26.549 10:38:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.114 10:38:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:27.114 10:38:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:27.371 [2024-07-25 10:38:30.979920] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:27.371 10:38:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:23:27.371 10:38:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.371 10:38:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:27.629 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:27.886 [2024-07-25 10:38:31.480987] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb0ad50 00:23:27.886 /dev/nbd0 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:23:27.886 1+0 records in 00:23:27.886 1+0 records out 00:23:27.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019957 s, 20.5 MB/s 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:23:27.886 10:38:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:35.990 63488+0 records in 00:23:35.990 63488+0 records out 00:23:35.990 32505856 bytes (33 MB, 31 MiB) copied, 8.06812 s, 4.0 MB/s 00:23:35.990 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:35.990 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:35.990 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:35.990 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:35.990 10:38:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@51 -- # local i 00:23:35.990 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:35.990 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:36.247 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:36.247 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:36.247 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:36.247 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:36.247 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:36.247 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:36.247 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:36.247 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:36.247 10:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:36.248 [2024-07-25 10:38:39.934863] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.504 [2024-07-25 10:38:40.151022] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:36.504 10:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:36.504 10:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:36.504 10:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:36.504 10:38:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:36.504 10:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:36.504 10:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:36.505 10:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.505 10:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.505 10:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.505 10:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.505 10:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.505 10:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.762 10:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.762 "name": "raid_bdev1", 00:23:36.762 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:36.762 "strip_size_kb": 0, 00:23:36.762 "state": "online", 00:23:36.762 "raid_level": "raid1", 00:23:36.762 "superblock": true, 00:23:36.762 "num_base_bdevs": 4, 00:23:36.762 "num_base_bdevs_discovered": 3, 00:23:36.762 "num_base_bdevs_operational": 3, 00:23:36.762 "base_bdevs_list": [ 00:23:36.762 { 00:23:36.762 "name": null, 00:23:36.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.762 "is_configured": false, 00:23:36.762 "data_offset": 2048, 00:23:36.762 "data_size": 63488 00:23:36.762 }, 00:23:36.762 { 00:23:36.762 "name": "BaseBdev2", 00:23:36.762 "uuid": "5cfb972a-cbf6-528a-bc02-5f9e38940c20", 00:23:36.762 "is_configured": true, 00:23:36.762 "data_offset": 2048, 00:23:36.762 "data_size": 63488 00:23:36.762 }, 00:23:36.762 { 00:23:36.762 "name": "BaseBdev3", 
00:23:36.762 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:36.762 "is_configured": true, 00:23:36.762 "data_offset": 2048, 00:23:36.762 "data_size": 63488 00:23:36.762 }, 00:23:36.762 { 00:23:36.762 "name": "BaseBdev4", 00:23:36.762 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:36.762 "is_configured": true, 00:23:36.762 "data_offset": 2048, 00:23:36.762 "data_size": 63488 00:23:36.762 } 00:23:36.762 ] 00:23:36.762 }' 00:23:36.762 10:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:36.762 10:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.327 10:38:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:37.585 [2024-07-25 10:38:41.245912] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:37.585 [2024-07-25 10:38:41.251286] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb0a940 00:23:37.585 [2024-07-25 10:38:41.253486] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:37.585 10:38:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:38.958 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:38.958 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:38.958 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:38.958 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:38.958 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:38.958 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.958 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.958 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:38.958 "name": "raid_bdev1", 00:23:38.958 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:38.958 "strip_size_kb": 0, 00:23:38.958 "state": "online", 00:23:38.958 "raid_level": "raid1", 00:23:38.958 "superblock": true, 00:23:38.958 "num_base_bdevs": 4, 00:23:38.958 "num_base_bdevs_discovered": 4, 00:23:38.958 "num_base_bdevs_operational": 4, 00:23:38.958 "process": { 00:23:38.958 "type": "rebuild", 00:23:38.958 "target": "spare", 00:23:38.958 "progress": { 00:23:38.958 "blocks": 24576, 00:23:38.958 "percent": 38 00:23:38.958 } 00:23:38.958 }, 00:23:38.958 "base_bdevs_list": [ 00:23:38.958 { 00:23:38.958 "name": "spare", 00:23:38.958 "uuid": "4dd856be-4041-590c-bf0c-390c5adaa45f", 00:23:38.958 "is_configured": true, 00:23:38.958 "data_offset": 2048, 00:23:38.958 "data_size": 63488 00:23:38.958 }, 00:23:38.958 { 00:23:38.958 "name": "BaseBdev2", 00:23:38.958 "uuid": "5cfb972a-cbf6-528a-bc02-5f9e38940c20", 00:23:38.958 "is_configured": true, 00:23:38.958 "data_offset": 2048, 00:23:38.958 "data_size": 63488 00:23:38.958 }, 00:23:38.958 { 00:23:38.958 "name": "BaseBdev3", 00:23:38.958 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:38.958 "is_configured": true, 00:23:38.958 "data_offset": 2048, 00:23:38.958 "data_size": 63488 00:23:38.958 }, 00:23:38.958 { 00:23:38.958 "name": "BaseBdev4", 00:23:38.959 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:38.959 "is_configured": true, 00:23:38.959 "data_offset": 2048, 00:23:38.959 "data_size": 63488 00:23:38.959 } 00:23:38.959 ] 00:23:38.959 }' 00:23:38.959 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:23:38.959 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:38.959 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:38.959 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:38.959 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:39.216 [2024-07-25 10:38:42.875942] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:39.474 [2024-07-25 10:38:42.968027] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:39.474 [2024-07-25 10:38:42.968092] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.474 [2024-07-25 10:38:42.968132] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:39.474 [2024-07-25 10:38:42.968144] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:39.474 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:39.474 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:39.474 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:39.474 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:39.474 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:39.474 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:39.474 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:39.474 10:38:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:39.474 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:39.474 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:39.474 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.474 10:38:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.732 10:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:39.732 "name": "raid_bdev1", 00:23:39.732 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:39.732 "strip_size_kb": 0, 00:23:39.732 "state": "online", 00:23:39.732 "raid_level": "raid1", 00:23:39.732 "superblock": true, 00:23:39.732 "num_base_bdevs": 4, 00:23:39.732 "num_base_bdevs_discovered": 3, 00:23:39.732 "num_base_bdevs_operational": 3, 00:23:39.732 "base_bdevs_list": [ 00:23:39.732 { 00:23:39.732 "name": null, 00:23:39.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.732 "is_configured": false, 00:23:39.732 "data_offset": 2048, 00:23:39.732 "data_size": 63488 00:23:39.732 }, 00:23:39.732 { 00:23:39.732 "name": "BaseBdev2", 00:23:39.732 "uuid": "5cfb972a-cbf6-528a-bc02-5f9e38940c20", 00:23:39.732 "is_configured": true, 00:23:39.732 "data_offset": 2048, 00:23:39.732 "data_size": 63488 00:23:39.732 }, 00:23:39.732 { 00:23:39.732 "name": "BaseBdev3", 00:23:39.732 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:39.732 "is_configured": true, 00:23:39.732 "data_offset": 2048, 00:23:39.732 "data_size": 63488 00:23:39.732 }, 00:23:39.732 { 00:23:39.732 "name": "BaseBdev4", 00:23:39.732 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:39.732 "is_configured": true, 00:23:39.732 "data_offset": 2048, 00:23:39.732 "data_size": 63488 
00:23:39.732 } 00:23:39.732 ] 00:23:39.732 }' 00:23:39.732 10:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:39.732 10:38:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:40.295 10:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:40.295 10:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:40.295 10:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:40.295 10:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:40.295 10:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:40.295 10:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.295 10:38:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.552 10:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:40.552 "name": "raid_bdev1", 00:23:40.552 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:40.552 "strip_size_kb": 0, 00:23:40.552 "state": "online", 00:23:40.552 "raid_level": "raid1", 00:23:40.552 "superblock": true, 00:23:40.552 "num_base_bdevs": 4, 00:23:40.552 "num_base_bdevs_discovered": 3, 00:23:40.552 "num_base_bdevs_operational": 3, 00:23:40.552 "base_bdevs_list": [ 00:23:40.552 { 00:23:40.552 "name": null, 00:23:40.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.552 "is_configured": false, 00:23:40.552 "data_offset": 2048, 00:23:40.552 "data_size": 63488 00:23:40.553 }, 00:23:40.553 { 00:23:40.553 "name": "BaseBdev2", 00:23:40.553 "uuid": "5cfb972a-cbf6-528a-bc02-5f9e38940c20", 00:23:40.553 "is_configured": true, 00:23:40.553 
"data_offset": 2048, 00:23:40.553 "data_size": 63488 00:23:40.553 }, 00:23:40.553 { 00:23:40.553 "name": "BaseBdev3", 00:23:40.553 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:40.553 "is_configured": true, 00:23:40.553 "data_offset": 2048, 00:23:40.553 "data_size": 63488 00:23:40.553 }, 00:23:40.553 { 00:23:40.553 "name": "BaseBdev4", 00:23:40.553 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:40.553 "is_configured": true, 00:23:40.553 "data_offset": 2048, 00:23:40.553 "data_size": 63488 00:23:40.553 } 00:23:40.553 ] 00:23:40.553 }' 00:23:40.553 10:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:40.553 10:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:40.553 10:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:40.553 10:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:40.553 10:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:40.810 [2024-07-25 10:38:44.349626] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:40.810 [2024-07-25 10:38:44.355077] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb0a260 00:23:40.810 [2024-07-25 10:38:44.356491] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:40.810 10:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:41.743 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.743 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:41.743 10:38:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:41.743 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:41.743 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:41.743 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.743 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.001 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:42.001 "name": "raid_bdev1", 00:23:42.001 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:42.001 "strip_size_kb": 0, 00:23:42.001 "state": "online", 00:23:42.001 "raid_level": "raid1", 00:23:42.001 "superblock": true, 00:23:42.001 "num_base_bdevs": 4, 00:23:42.001 "num_base_bdevs_discovered": 4, 00:23:42.001 "num_base_bdevs_operational": 4, 00:23:42.001 "process": { 00:23:42.001 "type": "rebuild", 00:23:42.001 "target": "spare", 00:23:42.001 "progress": { 00:23:42.001 "blocks": 24576, 00:23:42.001 "percent": 38 00:23:42.001 } 00:23:42.001 }, 00:23:42.001 "base_bdevs_list": [ 00:23:42.001 { 00:23:42.001 "name": "spare", 00:23:42.001 "uuid": "4dd856be-4041-590c-bf0c-390c5adaa45f", 00:23:42.001 "is_configured": true, 00:23:42.001 "data_offset": 2048, 00:23:42.001 "data_size": 63488 00:23:42.001 }, 00:23:42.001 { 00:23:42.001 "name": "BaseBdev2", 00:23:42.001 "uuid": "5cfb972a-cbf6-528a-bc02-5f9e38940c20", 00:23:42.001 "is_configured": true, 00:23:42.001 "data_offset": 2048, 00:23:42.001 "data_size": 63488 00:23:42.001 }, 00:23:42.001 { 00:23:42.001 "name": "BaseBdev3", 00:23:42.001 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:42.001 "is_configured": true, 00:23:42.001 "data_offset": 2048, 00:23:42.001 "data_size": 63488 00:23:42.001 }, 00:23:42.001 { 00:23:42.001 "name": 
"BaseBdev4", 00:23:42.001 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:42.001 "is_configured": true, 00:23:42.001 "data_offset": 2048, 00:23:42.001 "data_size": 63488 00:23:42.001 } 00:23:42.001 ] 00:23:42.001 }' 00:23:42.001 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:42.001 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.001 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:42.001 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.001 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:23:42.001 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:23:42.001 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:23:42.001 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:23:42.001 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:42.001 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:23:42.001 10:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:42.566 [2024-07-25 10:38:45.975533] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:42.566 [2024-07-25 10:38:46.171124] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xb0a260 00:23:42.566 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:23:42.566 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 
00:23:42.566 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.566 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:42.566 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:42.566 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:42.566 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:42.566 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.566 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:42.824 "name": "raid_bdev1", 00:23:42.824 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:42.824 "strip_size_kb": 0, 00:23:42.824 "state": "online", 00:23:42.824 "raid_level": "raid1", 00:23:42.824 "superblock": true, 00:23:42.824 "num_base_bdevs": 4, 00:23:42.824 "num_base_bdevs_discovered": 3, 00:23:42.824 "num_base_bdevs_operational": 3, 00:23:42.824 "process": { 00:23:42.824 "type": "rebuild", 00:23:42.824 "target": "spare", 00:23:42.824 "progress": { 00:23:42.824 "blocks": 38912, 00:23:42.824 "percent": 61 00:23:42.824 } 00:23:42.824 }, 00:23:42.824 "base_bdevs_list": [ 00:23:42.824 { 00:23:42.824 "name": "spare", 00:23:42.824 "uuid": "4dd856be-4041-590c-bf0c-390c5adaa45f", 00:23:42.824 "is_configured": true, 00:23:42.824 "data_offset": 2048, 00:23:42.824 "data_size": 63488 00:23:42.824 }, 00:23:42.824 { 00:23:42.824 "name": null, 00:23:42.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.824 "is_configured": false, 00:23:42.824 "data_offset": 2048, 00:23:42.824 
"data_size": 63488 00:23:42.824 }, 00:23:42.824 { 00:23:42.824 "name": "BaseBdev3", 00:23:42.824 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:42.824 "is_configured": true, 00:23:42.824 "data_offset": 2048, 00:23:42.824 "data_size": 63488 00:23:42.824 }, 00:23:42.824 { 00:23:42.824 "name": "BaseBdev4", 00:23:42.824 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:42.824 "is_configured": true, 00:23:42.824 "data_offset": 2048, 00:23:42.824 "data_size": 63488 00:23:42.824 } 00:23:42.824 ] 00:23:42.824 }' 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=890 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.824 10:38:46 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.081 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:43.081 "name": "raid_bdev1", 00:23:43.081 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:43.081 "strip_size_kb": 0, 00:23:43.081 "state": "online", 00:23:43.081 "raid_level": "raid1", 00:23:43.081 "superblock": true, 00:23:43.081 "num_base_bdevs": 4, 00:23:43.081 "num_base_bdevs_discovered": 3, 00:23:43.081 "num_base_bdevs_operational": 3, 00:23:43.081 "process": { 00:23:43.081 "type": "rebuild", 00:23:43.081 "target": "spare", 00:23:43.081 "progress": { 00:23:43.081 "blocks": 45056, 00:23:43.081 "percent": 70 00:23:43.081 } 00:23:43.081 }, 00:23:43.081 "base_bdevs_list": [ 00:23:43.081 { 00:23:43.081 "name": "spare", 00:23:43.081 "uuid": "4dd856be-4041-590c-bf0c-390c5adaa45f", 00:23:43.081 "is_configured": true, 00:23:43.081 "data_offset": 2048, 00:23:43.081 "data_size": 63488 00:23:43.081 }, 00:23:43.081 { 00:23:43.081 "name": null, 00:23:43.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.081 "is_configured": false, 00:23:43.081 "data_offset": 2048, 00:23:43.081 "data_size": 63488 00:23:43.081 }, 00:23:43.081 { 00:23:43.081 "name": "BaseBdev3", 00:23:43.081 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:43.081 "is_configured": true, 00:23:43.081 "data_offset": 2048, 00:23:43.081 "data_size": 63488 00:23:43.081 }, 00:23:43.081 { 00:23:43.081 "name": "BaseBdev4", 00:23:43.081 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:43.081 "is_configured": true, 00:23:43.081 "data_offset": 2048, 00:23:43.081 "data_size": 63488 00:23:43.081 } 00:23:43.081 ] 00:23:43.081 }' 00:23:43.081 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:43.338 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.338 10:38:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:43.338 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.338 10:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:43.950 [2024-07-25 10:38:47.582975] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:43.950 [2024-07-25 10:38:47.583047] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:43.950 [2024-07-25 10:38:47.583178] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.208 10:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:44.208 10:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.208 10:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:44.208 10:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:44.208 10:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:44.208 10:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:44.208 10:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.208 10:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.465 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:44.465 "name": "raid_bdev1", 00:23:44.465 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:44.465 "strip_size_kb": 0, 00:23:44.465 "state": "online", 00:23:44.465 "raid_level": "raid1", 00:23:44.465 "superblock": true, 00:23:44.465 "num_base_bdevs": 
4, 00:23:44.465 "num_base_bdevs_discovered": 3, 00:23:44.465 "num_base_bdevs_operational": 3, 00:23:44.465 "base_bdevs_list": [ 00:23:44.465 { 00:23:44.465 "name": "spare", 00:23:44.465 "uuid": "4dd856be-4041-590c-bf0c-390c5adaa45f", 00:23:44.465 "is_configured": true, 00:23:44.465 "data_offset": 2048, 00:23:44.465 "data_size": 63488 00:23:44.465 }, 00:23:44.465 { 00:23:44.465 "name": null, 00:23:44.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.465 "is_configured": false, 00:23:44.465 "data_offset": 2048, 00:23:44.465 "data_size": 63488 00:23:44.465 }, 00:23:44.465 { 00:23:44.465 "name": "BaseBdev3", 00:23:44.465 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:44.465 "is_configured": true, 00:23:44.465 "data_offset": 2048, 00:23:44.465 "data_size": 63488 00:23:44.465 }, 00:23:44.465 { 00:23:44.465 "name": "BaseBdev4", 00:23:44.465 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:44.465 "is_configured": true, 00:23:44.465 "data_offset": 2048, 00:23:44.465 "data_size": 63488 00:23:44.465 } 00:23:44.465 ] 00:23:44.465 }' 00:23:44.465 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:44.465 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:44.465 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:44.465 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:44.465 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:23:44.465 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:44.465 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:44.465 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:44.465 10:38:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:44.465 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:44.465 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.465 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.722 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:44.722 "name": "raid_bdev1", 00:23:44.722 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:44.722 "strip_size_kb": 0, 00:23:44.722 "state": "online", 00:23:44.722 "raid_level": "raid1", 00:23:44.722 "superblock": true, 00:23:44.722 "num_base_bdevs": 4, 00:23:44.722 "num_base_bdevs_discovered": 3, 00:23:44.722 "num_base_bdevs_operational": 3, 00:23:44.722 "base_bdevs_list": [ 00:23:44.722 { 00:23:44.722 "name": "spare", 00:23:44.722 "uuid": "4dd856be-4041-590c-bf0c-390c5adaa45f", 00:23:44.722 "is_configured": true, 00:23:44.722 "data_offset": 2048, 00:23:44.722 "data_size": 63488 00:23:44.722 }, 00:23:44.722 { 00:23:44.722 "name": null, 00:23:44.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.722 "is_configured": false, 00:23:44.722 "data_offset": 2048, 00:23:44.722 "data_size": 63488 00:23:44.722 }, 00:23:44.722 { 00:23:44.722 "name": "BaseBdev3", 00:23:44.722 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:44.722 "is_configured": true, 00:23:44.722 "data_offset": 2048, 00:23:44.722 "data_size": 63488 00:23:44.722 }, 00:23:44.722 { 00:23:44.722 "name": "BaseBdev4", 00:23:44.722 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:44.722 "is_configured": true, 00:23:44.722 "data_offset": 2048, 00:23:44.722 "data_size": 63488 00:23:44.722 } 00:23:44.722 ] 00:23:44.722 }' 00:23:44.722 10:38:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.980 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.241 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:45.241 "name": "raid_bdev1", 00:23:45.241 "uuid": 
"bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:45.241 "strip_size_kb": 0, 00:23:45.241 "state": "online", 00:23:45.241 "raid_level": "raid1", 00:23:45.241 "superblock": true, 00:23:45.241 "num_base_bdevs": 4, 00:23:45.241 "num_base_bdevs_discovered": 3, 00:23:45.241 "num_base_bdevs_operational": 3, 00:23:45.241 "base_bdevs_list": [ 00:23:45.241 { 00:23:45.241 "name": "spare", 00:23:45.241 "uuid": "4dd856be-4041-590c-bf0c-390c5adaa45f", 00:23:45.241 "is_configured": true, 00:23:45.241 "data_offset": 2048, 00:23:45.241 "data_size": 63488 00:23:45.241 }, 00:23:45.241 { 00:23:45.241 "name": null, 00:23:45.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.241 "is_configured": false, 00:23:45.241 "data_offset": 2048, 00:23:45.241 "data_size": 63488 00:23:45.241 }, 00:23:45.241 { 00:23:45.241 "name": "BaseBdev3", 00:23:45.241 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:45.241 "is_configured": true, 00:23:45.241 "data_offset": 2048, 00:23:45.241 "data_size": 63488 00:23:45.241 }, 00:23:45.241 { 00:23:45.241 "name": "BaseBdev4", 00:23:45.241 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:45.241 "is_configured": true, 00:23:45.241 "data_offset": 2048, 00:23:45.241 "data_size": 63488 00:23:45.241 } 00:23:45.241 ] 00:23:45.241 }' 00:23:45.241 10:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:45.241 10:38:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.848 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:46.107 [2024-07-25 10:38:49.565528] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:46.107 [2024-07-25 10:38:49.565564] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:46.107 [2024-07-25 10:38:49.565640] bdev_raid.c: 486:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:23:46.107 [2024-07-25 10:38:49.565727] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.107 [2024-07-25 10:38:49.565743] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb03a10 name raid_bdev1, state offline 00:23:46.107 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.107 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:23:46.366 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:46.366 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:23:46.366 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:46.366 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:46.366 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:46.366 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:46.366 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:46.366 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:46.366 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:46.366 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:46.366 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:46.366 10:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:46.366 10:38:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:46.625 /dev/nbd0 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:46.625 1+0 records in 00:23:46.625 1+0 records out 00:23:46.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184466 s, 22.2 MB/s 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 
00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:46.625 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:46.883 /dev/nbd1 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:46.883 1+0 records in 00:23:46.883 1+0 records out 00:23:46.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295474 
s, 13.9 MB/s 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:46.883 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:47.141 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:47.141 10:38:50 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:47.141 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:47.141 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:47.141 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:47.141 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:47.141 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:47.141 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:47.141 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:47.141 10:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:47.398 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:47.398 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:47.398 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:47.398 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:47.398 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:47.398 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:47.398 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:47.398 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:47.398 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:23:47.398 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:47.655 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:47.914 [2024-07-25 10:38:51.501869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:47.914 [2024-07-25 10:38:51.501919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:47.914 [2024-07-25 10:38:51.501945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0a060 00:23:47.914 [2024-07-25 10:38:51.501960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:47.914 [2024-07-25 10:38:51.503728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:47.914 [2024-07-25 10:38:51.503756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:47.914 [2024-07-25 10:38:51.503844] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:47.914 [2024-07-25 10:38:51.503882] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:47.914 [2024-07-25 10:38:51.504008] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:47.914 [2024-07-25 10:38:51.504180] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:47.914 spare 00:23:47.914 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:47.914 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:47.914 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:47.914 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:47.914 10:38:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:47.914 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:47.914 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.914 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.914 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.914 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.914 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.914 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.914 [2024-07-25 10:38:51.604528] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xb006d0 00:23:47.914 [2024-07-25 10:38:51.604549] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:47.914 [2024-07-25 10:38:51.604736] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb039e0 00:23:47.914 [2024-07-25 10:38:51.604904] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb006d0 00:23:47.914 [2024-07-25 10:38:51.604921] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb006d0 00:23:47.914 [2024-07-25 10:38:51.605034] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.172 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:48.172 "name": "raid_bdev1", 00:23:48.172 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:48.172 "strip_size_kb": 0, 00:23:48.172 "state": "online", 00:23:48.172 "raid_level": "raid1", 
00:23:48.172 "superblock": true, 00:23:48.172 "num_base_bdevs": 4, 00:23:48.172 "num_base_bdevs_discovered": 3, 00:23:48.172 "num_base_bdevs_operational": 3, 00:23:48.172 "base_bdevs_list": [ 00:23:48.172 { 00:23:48.172 "name": "spare", 00:23:48.172 "uuid": "4dd856be-4041-590c-bf0c-390c5adaa45f", 00:23:48.172 "is_configured": true, 00:23:48.172 "data_offset": 2048, 00:23:48.172 "data_size": 63488 00:23:48.172 }, 00:23:48.172 { 00:23:48.172 "name": null, 00:23:48.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.172 "is_configured": false, 00:23:48.172 "data_offset": 2048, 00:23:48.172 "data_size": 63488 00:23:48.172 }, 00:23:48.172 { 00:23:48.172 "name": "BaseBdev3", 00:23:48.172 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:48.172 "is_configured": true, 00:23:48.172 "data_offset": 2048, 00:23:48.172 "data_size": 63488 00:23:48.172 }, 00:23:48.172 { 00:23:48.172 "name": "BaseBdev4", 00:23:48.172 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:48.172 "is_configured": true, 00:23:48.172 "data_offset": 2048, 00:23:48.172 "data_size": 63488 00:23:48.172 } 00:23:48.172 ] 00:23:48.172 }' 00:23:48.172 10:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:48.172 10:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.737 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:48.737 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:48.737 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:48.737 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:48.737 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:48.737 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.737 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.995 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:48.995 "name": "raid_bdev1", 00:23:48.995 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:48.995 "strip_size_kb": 0, 00:23:48.995 "state": "online", 00:23:48.995 "raid_level": "raid1", 00:23:48.995 "superblock": true, 00:23:48.995 "num_base_bdevs": 4, 00:23:48.995 "num_base_bdevs_discovered": 3, 00:23:48.995 "num_base_bdevs_operational": 3, 00:23:48.995 "base_bdevs_list": [ 00:23:48.995 { 00:23:48.995 "name": "spare", 00:23:48.995 "uuid": "4dd856be-4041-590c-bf0c-390c5adaa45f", 00:23:48.995 "is_configured": true, 00:23:48.995 "data_offset": 2048, 00:23:48.995 "data_size": 63488 00:23:48.995 }, 00:23:48.995 { 00:23:48.995 "name": null, 00:23:48.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.995 "is_configured": false, 00:23:48.995 "data_offset": 2048, 00:23:48.995 "data_size": 63488 00:23:48.995 }, 00:23:48.995 { 00:23:48.995 "name": "BaseBdev3", 00:23:48.995 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:48.995 "is_configured": true, 00:23:48.995 "data_offset": 2048, 00:23:48.995 "data_size": 63488 00:23:48.995 }, 00:23:48.995 { 00:23:48.995 "name": "BaseBdev4", 00:23:48.995 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:48.995 "is_configured": true, 00:23:48.995 "data_offset": 2048, 00:23:48.995 "data_size": 63488 00:23:48.995 } 00:23:48.995 ] 00:23:48.995 }' 00:23:48.995 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:48.995 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:48.995 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:23:48.995 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:48.995 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.995 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:49.253 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.253 10:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:49.510 [2024-07-25 10:38:53.134320] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:49.510 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:49.510 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:49.510 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:49.510 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:49.510 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:49.510 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:49.510 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:49.510 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:49.510 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:49.510 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:49.510 10:38:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.510 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.770 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:49.770 "name": "raid_bdev1", 00:23:49.770 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:49.770 "strip_size_kb": 0, 00:23:49.770 "state": "online", 00:23:49.770 "raid_level": "raid1", 00:23:49.770 "superblock": true, 00:23:49.770 "num_base_bdevs": 4, 00:23:49.770 "num_base_bdevs_discovered": 2, 00:23:49.770 "num_base_bdevs_operational": 2, 00:23:49.770 "base_bdevs_list": [ 00:23:49.770 { 00:23:49.770 "name": null, 00:23:49.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.770 "is_configured": false, 00:23:49.770 "data_offset": 2048, 00:23:49.770 "data_size": 63488 00:23:49.770 }, 00:23:49.770 { 00:23:49.770 "name": null, 00:23:49.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.770 "is_configured": false, 00:23:49.770 "data_offset": 2048, 00:23:49.770 "data_size": 63488 00:23:49.770 }, 00:23:49.770 { 00:23:49.770 "name": "BaseBdev3", 00:23:49.770 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:49.770 "is_configured": true, 00:23:49.770 "data_offset": 2048, 00:23:49.770 "data_size": 63488 00:23:49.770 }, 00:23:49.770 { 00:23:49.770 "name": "BaseBdev4", 00:23:49.770 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:49.770 "is_configured": true, 00:23:49.770 "data_offset": 2048, 00:23:49.770 "data_size": 63488 00:23:49.770 } 00:23:49.770 ] 00:23:49.770 }' 00:23:49.770 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:49.770 10:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.334 10:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:50.592 [2024-07-25 10:38:54.185197] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:50.592 [2024-07-25 10:38:54.185395] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:23:50.592 [2024-07-25 10:38:54.185417] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:50.592 [2024-07-25 10:38:54.185447] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:50.592 [2024-07-25 10:38:54.190492] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb039e0 00:23:50.592 [2024-07-25 10:38:54.192714] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:50.592 10:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:23:51.524 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.524 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:51.524 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:51.524 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:51.524 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:51.524 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.524 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.781 10:38:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:51.781 "name": "raid_bdev1", 00:23:51.781 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:51.781 "strip_size_kb": 0, 00:23:51.781 "state": "online", 00:23:51.781 "raid_level": "raid1", 00:23:51.781 "superblock": true, 00:23:51.781 "num_base_bdevs": 4, 00:23:51.782 "num_base_bdevs_discovered": 3, 00:23:51.782 "num_base_bdevs_operational": 3, 00:23:51.782 "process": { 00:23:51.782 "type": "rebuild", 00:23:51.782 "target": "spare", 00:23:51.782 "progress": { 00:23:51.782 "blocks": 24576, 00:23:51.782 "percent": 38 00:23:51.782 } 00:23:51.782 }, 00:23:51.782 "base_bdevs_list": [ 00:23:51.782 { 00:23:51.782 "name": "spare", 00:23:51.782 "uuid": "4dd856be-4041-590c-bf0c-390c5adaa45f", 00:23:51.782 "is_configured": true, 00:23:51.782 "data_offset": 2048, 00:23:51.782 "data_size": 63488 00:23:51.782 }, 00:23:51.782 { 00:23:51.782 "name": null, 00:23:51.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.782 "is_configured": false, 00:23:51.782 "data_offset": 2048, 00:23:51.782 "data_size": 63488 00:23:51.782 }, 00:23:51.782 { 00:23:51.782 "name": "BaseBdev3", 00:23:51.782 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:51.782 "is_configured": true, 00:23:51.782 "data_offset": 2048, 00:23:51.782 "data_size": 63488 00:23:51.782 }, 00:23:51.782 { 00:23:51.782 "name": "BaseBdev4", 00:23:51.782 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:51.782 "is_configured": true, 00:23:51.782 "data_offset": 2048, 00:23:51.782 "data_size": 63488 00:23:51.782 } 00:23:51.782 ] 00:23:51.782 }' 00:23:51.782 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:52.039 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:52.039 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:52.039 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- 
# [[ spare == \s\p\a\r\e ]] 00:23:52.039 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:52.297 [2024-07-25 10:38:55.766716] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:52.297 [2024-07-25 10:38:55.805917] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:52.297 [2024-07-25 10:38:55.805972] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.297 [2024-07-25 10:38:55.805995] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:52.297 [2024-07-25 10:38:55.806005] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:52.297 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:52.297 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:52.297 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:52.297 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:52.297 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:52.297 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:52.297 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.297 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.297 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.297 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:52.297 10:38:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.297 10:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.555 10:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.555 "name": "raid_bdev1", 00:23:52.555 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:52.555 "strip_size_kb": 0, 00:23:52.555 "state": "online", 00:23:52.555 "raid_level": "raid1", 00:23:52.555 "superblock": true, 00:23:52.555 "num_base_bdevs": 4, 00:23:52.555 "num_base_bdevs_discovered": 2, 00:23:52.555 "num_base_bdevs_operational": 2, 00:23:52.555 "base_bdevs_list": [ 00:23:52.555 { 00:23:52.555 "name": null, 00:23:52.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.555 "is_configured": false, 00:23:52.555 "data_offset": 2048, 00:23:52.555 "data_size": 63488 00:23:52.555 }, 00:23:52.555 { 00:23:52.555 "name": null, 00:23:52.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.555 "is_configured": false, 00:23:52.555 "data_offset": 2048, 00:23:52.555 "data_size": 63488 00:23:52.555 }, 00:23:52.555 { 00:23:52.555 "name": "BaseBdev3", 00:23:52.555 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:52.555 "is_configured": true, 00:23:52.555 "data_offset": 2048, 00:23:52.555 "data_size": 63488 00:23:52.555 }, 00:23:52.555 { 00:23:52.555 "name": "BaseBdev4", 00:23:52.555 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:52.555 "is_configured": true, 00:23:52.555 "data_offset": 2048, 00:23:52.555 "data_size": 63488 00:23:52.555 } 00:23:52.555 ] 00:23:52.555 }' 00:23:52.555 10:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.555 10:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:53.120 10:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:53.378 [2024-07-25 10:38:56.873872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:53.378 [2024-07-25 10:38:56.873942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.378 [2024-07-25 10:38:56.873971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb00b40 00:23:53.378 [2024-07-25 10:38:56.873985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.378 [2024-07-25 10:38:56.874415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.378 [2024-07-25 10:38:56.874452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:53.378 [2024-07-25 10:38:56.874541] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:53.378 [2024-07-25 10:38:56.874564] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:23:53.378 [2024-07-25 10:38:56.874573] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:53.378 [2024-07-25 10:38:56.874594] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:53.378 [2024-07-25 10:38:56.879172] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb039e0 00:23:53.378 [2024-07-25 10:38:56.880587] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:53.378 spare 00:23:53.378 10:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:23:54.311 10:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.311 10:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:54.311 10:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:54.311 10:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:54.311 10:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:54.311 10:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.311 10:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.569 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:54.569 "name": "raid_bdev1", 00:23:54.569 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:54.569 "strip_size_kb": 0, 00:23:54.569 "state": "online", 00:23:54.569 "raid_level": "raid1", 00:23:54.569 "superblock": true, 00:23:54.569 "num_base_bdevs": 4, 00:23:54.569 "num_base_bdevs_discovered": 3, 00:23:54.569 "num_base_bdevs_operational": 3, 00:23:54.569 "process": { 00:23:54.569 "type": "rebuild", 00:23:54.569 "target": "spare", 00:23:54.569 "progress": { 00:23:54.569 "blocks": 24576, 00:23:54.569 
"percent": 38 00:23:54.569 } 00:23:54.569 }, 00:23:54.569 "base_bdevs_list": [ 00:23:54.569 { 00:23:54.569 "name": "spare", 00:23:54.569 "uuid": "4dd856be-4041-590c-bf0c-390c5adaa45f", 00:23:54.569 "is_configured": true, 00:23:54.569 "data_offset": 2048, 00:23:54.569 "data_size": 63488 00:23:54.569 }, 00:23:54.569 { 00:23:54.569 "name": null, 00:23:54.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.569 "is_configured": false, 00:23:54.569 "data_offset": 2048, 00:23:54.569 "data_size": 63488 00:23:54.569 }, 00:23:54.569 { 00:23:54.569 "name": "BaseBdev3", 00:23:54.569 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:54.569 "is_configured": true, 00:23:54.569 "data_offset": 2048, 00:23:54.569 "data_size": 63488 00:23:54.569 }, 00:23:54.569 { 00:23:54.569 "name": "BaseBdev4", 00:23:54.569 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:54.569 "is_configured": true, 00:23:54.569 "data_offset": 2048, 00:23:54.569 "data_size": 63488 00:23:54.569 } 00:23:54.569 ] 00:23:54.569 }' 00:23:54.569 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:54.569 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:54.569 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:54.569 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:54.569 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:54.827 [2024-07-25 10:38:58.523656] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:55.086 [2024-07-25 10:38:58.594856] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:55.086 [2024-07-25 10:38:58.594922] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.086 [2024-07-25 10:38:58.594941] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:55.086 [2024-07-25 10:38:58.594950] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:55.086 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:55.086 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:55.086 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:55.086 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:55.086 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:55.086 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:55.086 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:55.086 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:55.086 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:55.086 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:55.086 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.086 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.344 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:55.344 "name": "raid_bdev1", 00:23:55.344 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:55.344 "strip_size_kb": 0, 00:23:55.344 "state": 
"online", 00:23:55.344 "raid_level": "raid1", 00:23:55.344 "superblock": true, 00:23:55.344 "num_base_bdevs": 4, 00:23:55.344 "num_base_bdevs_discovered": 2, 00:23:55.344 "num_base_bdevs_operational": 2, 00:23:55.344 "base_bdevs_list": [ 00:23:55.344 { 00:23:55.344 "name": null, 00:23:55.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.344 "is_configured": false, 00:23:55.344 "data_offset": 2048, 00:23:55.344 "data_size": 63488 00:23:55.344 }, 00:23:55.344 { 00:23:55.344 "name": null, 00:23:55.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.344 "is_configured": false, 00:23:55.344 "data_offset": 2048, 00:23:55.344 "data_size": 63488 00:23:55.344 }, 00:23:55.344 { 00:23:55.344 "name": "BaseBdev3", 00:23:55.344 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:55.344 "is_configured": true, 00:23:55.344 "data_offset": 2048, 00:23:55.344 "data_size": 63488 00:23:55.344 }, 00:23:55.344 { 00:23:55.344 "name": "BaseBdev4", 00:23:55.344 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:55.344 "is_configured": true, 00:23:55.344 "data_offset": 2048, 00:23:55.344 "data_size": 63488 00:23:55.344 } 00:23:55.344 ] 00:23:55.344 }' 00:23:55.344 10:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:55.344 10:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.910 10:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:55.910 10:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:55.910 10:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:55.910 10:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:55.910 10:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:55.910 10:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.910 10:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.169 10:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:56.169 "name": "raid_bdev1", 00:23:56.169 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:56.169 "strip_size_kb": 0, 00:23:56.169 "state": "online", 00:23:56.169 "raid_level": "raid1", 00:23:56.169 "superblock": true, 00:23:56.169 "num_base_bdevs": 4, 00:23:56.169 "num_base_bdevs_discovered": 2, 00:23:56.169 "num_base_bdevs_operational": 2, 00:23:56.169 "base_bdevs_list": [ 00:23:56.169 { 00:23:56.169 "name": null, 00:23:56.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.169 "is_configured": false, 00:23:56.169 "data_offset": 2048, 00:23:56.169 "data_size": 63488 00:23:56.169 }, 00:23:56.169 { 00:23:56.169 "name": null, 00:23:56.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.169 "is_configured": false, 00:23:56.169 "data_offset": 2048, 00:23:56.169 "data_size": 63488 00:23:56.169 }, 00:23:56.169 { 00:23:56.169 "name": "BaseBdev3", 00:23:56.169 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:56.169 "is_configured": true, 00:23:56.169 "data_offset": 2048, 00:23:56.169 "data_size": 63488 00:23:56.169 }, 00:23:56.169 { 00:23:56.169 "name": "BaseBdev4", 00:23:56.169 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:56.169 "is_configured": true, 00:23:56.169 "data_offset": 2048, 00:23:56.169 "data_size": 63488 00:23:56.169 } 00:23:56.169 ] 00:23:56.169 }' 00:23:56.169 10:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:56.169 10:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:56.169 10:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:23:56.169 10:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:56.169 10:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:56.428 10:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:56.686 [2024-07-25 10:39:00.240667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:56.686 [2024-07-25 10:39:00.240736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.686 [2024-07-25 10:39:00.240764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb09660 00:23:56.686 [2024-07-25 10:39:00.240779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.686 [2024-07-25 10:39:00.241228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.686 [2024-07-25 10:39:00.241252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:56.686 [2024-07-25 10:39:00.241337] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:56.686 [2024-07-25 10:39:00.241354] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:23:56.686 [2024-07-25 10:39:00.241363] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:56.686 BaseBdev1 00:23:56.686 10:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:23:57.620 10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:57.620 
10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:57.620 10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:57.620 10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:57.620 10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:57.620 10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:57.620 10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:57.620 10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:57.620 10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:57.620 10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:57.620 10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.620 10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.878 10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:57.878 "name": "raid_bdev1", 00:23:57.878 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:57.878 "strip_size_kb": 0, 00:23:57.878 "state": "online", 00:23:57.878 "raid_level": "raid1", 00:23:57.878 "superblock": true, 00:23:57.878 "num_base_bdevs": 4, 00:23:57.878 "num_base_bdevs_discovered": 2, 00:23:57.878 "num_base_bdevs_operational": 2, 00:23:57.878 "base_bdevs_list": [ 00:23:57.878 { 00:23:57.878 "name": null, 00:23:57.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.878 "is_configured": false, 00:23:57.878 "data_offset": 2048, 00:23:57.878 "data_size": 63488 00:23:57.878 }, 
00:23:57.878 { 00:23:57.878 "name": null, 00:23:57.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.878 "is_configured": false, 00:23:57.878 "data_offset": 2048, 00:23:57.878 "data_size": 63488 00:23:57.878 }, 00:23:57.878 { 00:23:57.878 "name": "BaseBdev3", 00:23:57.878 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:57.878 "is_configured": true, 00:23:57.878 "data_offset": 2048, 00:23:57.878 "data_size": 63488 00:23:57.878 }, 00:23:57.878 { 00:23:57.878 "name": "BaseBdev4", 00:23:57.878 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:57.878 "is_configured": true, 00:23:57.878 "data_offset": 2048, 00:23:57.878 "data_size": 63488 00:23:57.878 } 00:23:57.878 ] 00:23:57.878 }' 00:23:57.878 10:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:57.879 10:39:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.444 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:58.444 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:58.444 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:58.444 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:58.444 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:58.444 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.444 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.702 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:58.702 "name": "raid_bdev1", 00:23:58.702 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:23:58.702 
"strip_size_kb": 0, 00:23:58.702 "state": "online", 00:23:58.702 "raid_level": "raid1", 00:23:58.702 "superblock": true, 00:23:58.702 "num_base_bdevs": 4, 00:23:58.702 "num_base_bdevs_discovered": 2, 00:23:58.702 "num_base_bdevs_operational": 2, 00:23:58.702 "base_bdevs_list": [ 00:23:58.702 { 00:23:58.702 "name": null, 00:23:58.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.702 "is_configured": false, 00:23:58.702 "data_offset": 2048, 00:23:58.702 "data_size": 63488 00:23:58.702 }, 00:23:58.702 { 00:23:58.702 "name": null, 00:23:58.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.702 "is_configured": false, 00:23:58.702 "data_offset": 2048, 00:23:58.702 "data_size": 63488 00:23:58.702 }, 00:23:58.702 { 00:23:58.702 "name": "BaseBdev3", 00:23:58.702 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:23:58.702 "is_configured": true, 00:23:58.702 "data_offset": 2048, 00:23:58.702 "data_size": 63488 00:23:58.702 }, 00:23:58.702 { 00:23:58.702 "name": "BaseBdev4", 00:23:58.702 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:23:58.702 "is_configured": true, 00:23:58.702 "data_offset": 2048, 00:23:58.702 "data_size": 63488 00:23:58.702 } 00:23:58.702 ] 00:23:58.702 }' 00:23:58.702 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:58.702 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:58.702 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:58.960 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:58.960 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:58.960 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:23:58.960 10:39:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:58.960 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:58.960 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:58.960 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:58.960 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:58.960 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:58.960 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:58.960 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:58.960 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:58.960 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:59.218 [2024-07-25 10:39:02.711226] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:59.218 [2024-07-25 10:39:02.711420] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:23:59.218 [2024-07-25 10:39:02.711441] 
bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:59.218 request: 00:23:59.218 { 00:23:59.218 "base_bdev": "BaseBdev1", 00:23:59.218 "raid_bdev": "raid_bdev1", 00:23:59.218 "method": "bdev_raid_add_base_bdev", 00:23:59.218 "req_id": 1 00:23:59.218 } 00:23:59.218 Got JSON-RPC error response 00:23:59.218 response: 00:23:59.218 { 00:23:59.218 "code": -22, 00:23:59.218 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:59.218 } 00:23:59.218 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:23:59.218 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:59.218 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:59.218 10:39:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:59.218 10:39:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:24:00.155 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:00.155 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:00.155 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:00.155 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:00.155 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:00.155 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:00.155 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:00.155 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:00.155 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:24:00.155 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:00.155 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.155 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.413 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:00.414 "name": "raid_bdev1", 00:24:00.414 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:24:00.414 "strip_size_kb": 0, 00:24:00.414 "state": "online", 00:24:00.414 "raid_level": "raid1", 00:24:00.414 "superblock": true, 00:24:00.414 "num_base_bdevs": 4, 00:24:00.414 "num_base_bdevs_discovered": 2, 00:24:00.414 "num_base_bdevs_operational": 2, 00:24:00.414 "base_bdevs_list": [ 00:24:00.414 { 00:24:00.414 "name": null, 00:24:00.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.414 "is_configured": false, 00:24:00.414 "data_offset": 2048, 00:24:00.414 "data_size": 63488 00:24:00.414 }, 00:24:00.414 { 00:24:00.414 "name": null, 00:24:00.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.414 "is_configured": false, 00:24:00.414 "data_offset": 2048, 00:24:00.414 "data_size": 63488 00:24:00.414 }, 00:24:00.414 { 00:24:00.414 "name": "BaseBdev3", 00:24:00.414 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 00:24:00.414 "is_configured": true, 00:24:00.414 "data_offset": 2048, 00:24:00.414 "data_size": 63488 00:24:00.414 }, 00:24:00.414 { 00:24:00.414 "name": "BaseBdev4", 00:24:00.414 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:24:00.414 "is_configured": true, 00:24:00.414 "data_offset": 2048, 00:24:00.414 "data_size": 63488 00:24:00.414 } 00:24:00.414 ] 00:24:00.414 }' 00:24:00.414 10:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:00.414 10:39:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:00.980 10:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:00.980 10:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:00.980 10:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:00.980 10:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:00.980 10:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:00.980 10:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.980 10:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:01.238 "name": "raid_bdev1", 00:24:01.238 "uuid": "bb73c2c5-fd41-430d-b946-3d0e0dc4e6a6", 00:24:01.238 "strip_size_kb": 0, 00:24:01.238 "state": "online", 00:24:01.238 "raid_level": "raid1", 00:24:01.238 "superblock": true, 00:24:01.238 "num_base_bdevs": 4, 00:24:01.238 "num_base_bdevs_discovered": 2, 00:24:01.238 "num_base_bdevs_operational": 2, 00:24:01.238 "base_bdevs_list": [ 00:24:01.238 { 00:24:01.238 "name": null, 00:24:01.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.238 "is_configured": false, 00:24:01.238 "data_offset": 2048, 00:24:01.238 "data_size": 63488 00:24:01.238 }, 00:24:01.238 { 00:24:01.238 "name": null, 00:24:01.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.238 "is_configured": false, 00:24:01.238 "data_offset": 2048, 00:24:01.238 "data_size": 63488 00:24:01.238 }, 00:24:01.238 { 00:24:01.238 "name": "BaseBdev3", 00:24:01.238 "uuid": "dd360cee-26f4-564b-a909-c4e524bc300d", 
00:24:01.238 "is_configured": true, 00:24:01.238 "data_offset": 2048, 00:24:01.238 "data_size": 63488 00:24:01.238 }, 00:24:01.238 { 00:24:01.238 "name": "BaseBdev4", 00:24:01.238 "uuid": "e47582eb-4b20-5803-b128-455c6423f600", 00:24:01.238 "is_configured": true, 00:24:01.238 "data_offset": 2048, 00:24:01.238 "data_size": 63488 00:24:01.238 } 00:24:01.238 ] 00:24:01.238 }' 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2447845 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2447845 ']' 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 2447845 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2447845 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2447845' 00:24:01.238 killing process with pid 2447845 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 2447845 00:24:01.238 
Received shutdown signal, test time was about 60.000000 seconds 00:24:01.238 00:24:01.238 Latency(us) 00:24:01.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.238 =================================================================================================================== 00:24:01.238 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:01.238 [2024-07-25 10:39:04.921632] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:01.238 10:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 2447845 00:24:01.238 [2024-07-25 10:39:04.921767] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:01.238 [2024-07-25 10:39:04.921846] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:01.238 [2024-07-25 10:39:04.921869] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb006d0 name raid_bdev1, state offline 00:24:01.497 [2024-07-25 10:39:04.985870] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:01.755 10:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:24:01.755 00:24:01.755 real 0m38.873s 00:24:01.755 user 0m55.884s 00:24:01.755 sys 0m6.621s 00:24:01.755 10:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:01.755 10:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:01.755 ************************************ 00:24:01.755 END TEST raid_rebuild_test_sb 00:24:01.755 ************************************ 00:24:01.755 10:39:05 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:24:01.755 10:39:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:01.756 10:39:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:01.756 10:39:05 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:24:01.756 ************************************ 00:24:01.756 START TEST raid_rebuild_test_io 00:24:01.756 ************************************ 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:01.756 10:39:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2453029 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2453029 /var/tmp/spdk-raid.sock 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 2453029 ']' 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:01.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.756 10:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:01.756 [2024-07-25 10:39:05.372767] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:24:01.756 [2024-07-25 10:39:05.372837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2453029 ] 00:24:01.756 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:01.756 Zero copy mechanism will not be used. 
00:24:01.756 [2024-07-25 10:39:05.458540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.014 [2024-07-25 10:39:05.580490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.014 [2024-07-25 10:39:05.652392] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:02.014 [2024-07-25 10:39:05.652437] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:02.948 10:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:02.948 10:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:24:02.948 10:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:02.948 10:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:02.948 BaseBdev1_malloc 00:24:02.948 10:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:03.206 [2024-07-25 10:39:06.827865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:03.206 [2024-07-25 10:39:06.827925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.206 [2024-07-25 10:39:06.827950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2355430 00:24:03.206 [2024-07-25 10:39:06.827966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.206 [2024-07-25 10:39:06.829585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.206 [2024-07-25 10:39:06.829613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:03.206 BaseBdev1 
00:24:03.206 10:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:03.206 10:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:03.464 BaseBdev2_malloc 00:24:03.464 10:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:03.745 [2024-07-25 10:39:07.347743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:03.745 [2024-07-25 10:39:07.347821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.745 [2024-07-25 10:39:07.347867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24f8a20 00:24:03.745 [2024-07-25 10:39:07.347882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.745 [2024-07-25 10:39:07.349730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.745 [2024-07-25 10:39:07.349754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:03.745 BaseBdev2 00:24:03.745 10:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:03.745 10:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:04.042 BaseBdev3_malloc 00:24:04.042 10:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:04.300 [2024-07-25 10:39:07.887683] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:04.300 [2024-07-25 10:39:07.887753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.301 [2024-07-25 10:39:07.887783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24ef700 00:24:04.301 [2024-07-25 10:39:07.887798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.301 [2024-07-25 10:39:07.889583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.301 [2024-07-25 10:39:07.889622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:04.301 BaseBdev3 00:24:04.301 10:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:04.301 10:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:04.559 BaseBdev4_malloc 00:24:04.559 10:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:04.817 [2024-07-25 10:39:08.392714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:04.817 [2024-07-25 10:39:08.392771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.817 [2024-07-25 10:39:08.392796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24ed290 00:24:04.817 [2024-07-25 10:39:08.392812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.817 [2024-07-25 10:39:08.394183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.817 [2024-07-25 10:39:08.394211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev4 00:24:04.817 BaseBdev4 00:24:04.817 10:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:05.075 spare_malloc 00:24:05.075 10:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:05.333 spare_delay 00:24:05.333 10:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:05.591 [2024-07-25 10:39:09.190559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:05.591 [2024-07-25 10:39:09.190648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.591 [2024-07-25 10:39:09.190675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x234e330 00:24:05.591 [2024-07-25 10:39:09.190687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.591 [2024-07-25 10:39:09.192247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.591 [2024-07-25 10:39:09.192277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:05.591 spare 00:24:05.591 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:05.849 [2024-07-25 10:39:09.439260] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:05.849 [2024-07-25 10:39:09.440474] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:24:05.849 [2024-07-25 10:39:09.440529] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:05.849 [2024-07-25 10:39:09.440578] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:05.849 [2024-07-25 10:39:09.440689] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x234fa10 00:24:05.849 [2024-07-25 10:39:09.440702] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:05.849 [2024-07-25 10:39:09.440907] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x236c3f0 00:24:05.849 [2024-07-25 10:39:09.441065] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x234fa10 00:24:05.849 [2024-07-25 10:39:09.441078] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x234fa10 00:24:05.849 [2024-07-25 10:39:09.441252] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.849 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:05.849 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:05.849 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:05.849 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:05.849 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:05.849 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:05.849 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:05.849 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:05.849 10:39:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:05.849 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:05.849 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.849 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.108 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:06.108 "name": "raid_bdev1", 00:24:06.108 "uuid": "2e060238-84d1-4154-89e9-6ce857d27083", 00:24:06.108 "strip_size_kb": 0, 00:24:06.108 "state": "online", 00:24:06.108 "raid_level": "raid1", 00:24:06.108 "superblock": false, 00:24:06.108 "num_base_bdevs": 4, 00:24:06.108 "num_base_bdevs_discovered": 4, 00:24:06.108 "num_base_bdevs_operational": 4, 00:24:06.108 "base_bdevs_list": [ 00:24:06.108 { 00:24:06.108 "name": "BaseBdev1", 00:24:06.108 "uuid": "f3b0abca-bd3b-5387-b964-b4fc755141ed", 00:24:06.108 "is_configured": true, 00:24:06.108 "data_offset": 0, 00:24:06.108 "data_size": 65536 00:24:06.108 }, 00:24:06.108 { 00:24:06.108 "name": "BaseBdev2", 00:24:06.108 "uuid": "cf08cdd1-8dcc-502f-9e06-b95b82a52f7f", 00:24:06.108 "is_configured": true, 00:24:06.108 "data_offset": 0, 00:24:06.108 "data_size": 65536 00:24:06.108 }, 00:24:06.108 { 00:24:06.108 "name": "BaseBdev3", 00:24:06.108 "uuid": "028db13b-c53c-520b-81eb-54baac8445da", 00:24:06.108 "is_configured": true, 00:24:06.108 "data_offset": 0, 00:24:06.108 "data_size": 65536 00:24:06.108 }, 00:24:06.108 { 00:24:06.108 "name": "BaseBdev4", 00:24:06.108 "uuid": "aeacf647-c47b-5f2e-b4e9-7031af119544", 00:24:06.108 "is_configured": true, 00:24:06.108 "data_offset": 0, 00:24:06.108 "data_size": 65536 00:24:06.108 } 00:24:06.108 ] 00:24:06.108 }' 00:24:06.108 10:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:24:06.108 10:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:06.673 10:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:06.674 10:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:24:06.932 [2024-07-25 10:39:10.530507] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:06.932 10:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:24:06.932 10:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.932 10:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:07.189 10:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:24:07.189 10:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:24:07.189 10:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:07.189 10:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:07.447 [2024-07-25 10:39:10.933819] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2350830 00:24:07.447 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:07.447 Zero copy mechanism will not be used. 00:24:07.447 Running I/O for 60 seconds... 
00:24:07.447 [2024-07-25 10:39:11.068209] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:07.447 [2024-07-25 10:39:11.080207] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2350830 00:24:07.447 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:07.447 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:07.447 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:07.447 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:07.447 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:07.447 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:07.447 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:07.447 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:07.447 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:07.447 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:07.447 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.447 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.705 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:07.705 "name": "raid_bdev1", 00:24:07.705 "uuid": "2e060238-84d1-4154-89e9-6ce857d27083", 00:24:07.705 "strip_size_kb": 0, 00:24:07.705 "state": "online", 00:24:07.705 "raid_level": "raid1", 00:24:07.705 "superblock": false, 
00:24:07.705 "num_base_bdevs": 4, 00:24:07.705 "num_base_bdevs_discovered": 3, 00:24:07.705 "num_base_bdevs_operational": 3, 00:24:07.705 "base_bdevs_list": [ 00:24:07.705 { 00:24:07.705 "name": null, 00:24:07.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.705 "is_configured": false, 00:24:07.705 "data_offset": 0, 00:24:07.705 "data_size": 65536 00:24:07.705 }, 00:24:07.705 { 00:24:07.705 "name": "BaseBdev2", 00:24:07.705 "uuid": "cf08cdd1-8dcc-502f-9e06-b95b82a52f7f", 00:24:07.705 "is_configured": true, 00:24:07.705 "data_offset": 0, 00:24:07.705 "data_size": 65536 00:24:07.705 }, 00:24:07.705 { 00:24:07.705 "name": "BaseBdev3", 00:24:07.705 "uuid": "028db13b-c53c-520b-81eb-54baac8445da", 00:24:07.705 "is_configured": true, 00:24:07.705 "data_offset": 0, 00:24:07.705 "data_size": 65536 00:24:07.705 }, 00:24:07.705 { 00:24:07.705 "name": "BaseBdev4", 00:24:07.705 "uuid": "aeacf647-c47b-5f2e-b4e9-7031af119544", 00:24:07.705 "is_configured": true, 00:24:07.705 "data_offset": 0, 00:24:07.705 "data_size": 65536 00:24:07.705 } 00:24:07.705 ] 00:24:07.705 }' 00:24:07.705 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:07.705 10:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:08.271 10:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:08.530 [2024-07-25 10:39:12.168475] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:08.530 10:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:24:08.530 [2024-07-25 10:39:12.233055] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24e9e00 00:24:08.530 [2024-07-25 10:39:12.235336] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:08.788 [2024-07-25 
10:39:12.345996] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:08.788 [2024-07-25 10:39:12.346398] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:08.788 [2024-07-25 10:39:12.465524] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:08.788 [2024-07-25 10:39:12.466371] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:09.354 [2024-07-25 10:39:12.845991] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:09.611 [2024-07-25 10:39:13.189564] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:09.611 10:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.611 10:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:09.611 10:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:09.611 10:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:09.611 10:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:09.611 10:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.611 10:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.611 [2024-07-25 10:39:13.318036] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:09.611 
[2024-07-25 10:39:13.318790] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:09.869 10:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:09.869 "name": "raid_bdev1", 00:24:09.869 "uuid": "2e060238-84d1-4154-89e9-6ce857d27083", 00:24:09.869 "strip_size_kb": 0, 00:24:09.869 "state": "online", 00:24:09.869 "raid_level": "raid1", 00:24:09.869 "superblock": false, 00:24:09.869 "num_base_bdevs": 4, 00:24:09.869 "num_base_bdevs_discovered": 4, 00:24:09.869 "num_base_bdevs_operational": 4, 00:24:09.869 "process": { 00:24:09.869 "type": "rebuild", 00:24:09.869 "target": "spare", 00:24:09.869 "progress": { 00:24:09.869 "blocks": 16384, 00:24:09.869 "percent": 25 00:24:09.869 } 00:24:09.869 }, 00:24:09.869 "base_bdevs_list": [ 00:24:09.869 { 00:24:09.869 "name": "spare", 00:24:09.869 "uuid": "5f2a8402-53df-5915-9297-c1b3e570dc61", 00:24:09.869 "is_configured": true, 00:24:09.869 "data_offset": 0, 00:24:09.869 "data_size": 65536 00:24:09.869 }, 00:24:09.869 { 00:24:09.869 "name": "BaseBdev2", 00:24:09.869 "uuid": "cf08cdd1-8dcc-502f-9e06-b95b82a52f7f", 00:24:09.869 "is_configured": true, 00:24:09.869 "data_offset": 0, 00:24:09.869 "data_size": 65536 00:24:09.869 }, 00:24:09.869 { 00:24:09.869 "name": "BaseBdev3", 00:24:09.869 "uuid": "028db13b-c53c-520b-81eb-54baac8445da", 00:24:09.869 "is_configured": true, 00:24:09.869 "data_offset": 0, 00:24:09.869 "data_size": 65536 00:24:09.869 }, 00:24:09.869 { 00:24:09.869 "name": "BaseBdev4", 00:24:09.869 "uuid": "aeacf647-c47b-5f2e-b4e9-7031af119544", 00:24:09.869 "is_configured": true, 00:24:09.869 "data_offset": 0, 00:24:09.869 "data_size": 65536 00:24:09.869 } 00:24:09.869 ] 00:24:09.869 }' 00:24:09.869 10:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:09.869 10:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:24:09.869 10:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:10.126 10:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:10.126 10:39:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:10.126 [2024-07-25 10:39:13.663194] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:10.126 [2024-07-25 10:39:13.775162] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:10.126 [2024-07-25 10:39:13.775942] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:10.383 [2024-07-25 10:39:13.852328] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:10.384 [2024-07-25 10:39:13.898125] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:10.384 [2024-07-25 10:39:14.001384] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:10.384 [2024-07-25 10:39:14.012160] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.384 [2024-07-25 10:39:14.012199] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:10.384 [2024-07-25 10:39:14.012212] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:10.384 [2024-07-25 10:39:14.035777] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2350830 00:24:10.384 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:10.384 
10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:10.384 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:10.384 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:10.384 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:10.384 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:10.384 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:10.384 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:10.384 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:10.384 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:10.384 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.384 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.949 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:10.949 "name": "raid_bdev1", 00:24:10.949 "uuid": "2e060238-84d1-4154-89e9-6ce857d27083", 00:24:10.949 "strip_size_kb": 0, 00:24:10.949 "state": "online", 00:24:10.949 "raid_level": "raid1", 00:24:10.949 "superblock": false, 00:24:10.949 "num_base_bdevs": 4, 00:24:10.949 "num_base_bdevs_discovered": 3, 00:24:10.949 "num_base_bdevs_operational": 3, 00:24:10.949 "base_bdevs_list": [ 00:24:10.949 { 00:24:10.949 "name": null, 00:24:10.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.949 "is_configured": false, 00:24:10.949 "data_offset": 0, 00:24:10.949 "data_size": 65536 00:24:10.949 }, 
00:24:10.949 { 00:24:10.949 "name": "BaseBdev2", 00:24:10.949 "uuid": "cf08cdd1-8dcc-502f-9e06-b95b82a52f7f", 00:24:10.949 "is_configured": true, 00:24:10.949 "data_offset": 0, 00:24:10.949 "data_size": 65536 00:24:10.949 }, 00:24:10.949 { 00:24:10.949 "name": "BaseBdev3", 00:24:10.949 "uuid": "028db13b-c53c-520b-81eb-54baac8445da", 00:24:10.949 "is_configured": true, 00:24:10.949 "data_offset": 0, 00:24:10.949 "data_size": 65536 00:24:10.949 }, 00:24:10.949 { 00:24:10.949 "name": "BaseBdev4", 00:24:10.949 "uuid": "aeacf647-c47b-5f2e-b4e9-7031af119544", 00:24:10.949 "is_configured": true, 00:24:10.949 "data_offset": 0, 00:24:10.949 "data_size": 65536 00:24:10.949 } 00:24:10.949 ] 00:24:10.949 }' 00:24:10.949 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:10.949 10:39:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:11.514 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:11.514 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:11.514 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:11.514 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:11.514 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:11.514 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.514 10:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.771 10:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:11.771 "name": "raid_bdev1", 00:24:11.771 "uuid": "2e060238-84d1-4154-89e9-6ce857d27083", 00:24:11.771 
"strip_size_kb": 0, 00:24:11.771 "state": "online", 00:24:11.771 "raid_level": "raid1", 00:24:11.771 "superblock": false, 00:24:11.771 "num_base_bdevs": 4, 00:24:11.771 "num_base_bdevs_discovered": 3, 00:24:11.771 "num_base_bdevs_operational": 3, 00:24:11.771 "base_bdevs_list": [ 00:24:11.771 { 00:24:11.771 "name": null, 00:24:11.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.771 "is_configured": false, 00:24:11.771 "data_offset": 0, 00:24:11.771 "data_size": 65536 00:24:11.771 }, 00:24:11.771 { 00:24:11.771 "name": "BaseBdev2", 00:24:11.771 "uuid": "cf08cdd1-8dcc-502f-9e06-b95b82a52f7f", 00:24:11.771 "is_configured": true, 00:24:11.771 "data_offset": 0, 00:24:11.771 "data_size": 65536 00:24:11.771 }, 00:24:11.771 { 00:24:11.771 "name": "BaseBdev3", 00:24:11.771 "uuid": "028db13b-c53c-520b-81eb-54baac8445da", 00:24:11.771 "is_configured": true, 00:24:11.771 "data_offset": 0, 00:24:11.771 "data_size": 65536 00:24:11.771 }, 00:24:11.771 { 00:24:11.771 "name": "BaseBdev4", 00:24:11.771 "uuid": "aeacf647-c47b-5f2e-b4e9-7031af119544", 00:24:11.771 "is_configured": true, 00:24:11.771 "data_offset": 0, 00:24:11.771 "data_size": 65536 00:24:11.771 } 00:24:11.771 ] 00:24:11.771 }' 00:24:11.771 10:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:11.771 10:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:11.771 10:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:11.771 10:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:11.771 10:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:12.029 [2024-07-25 10:39:15.591424] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:12.029 
10:39:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:12.029 [2024-07-25 10:39:15.681719] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24ec750 00:24:12.029 [2024-07-25 10:39:15.683386] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:12.287 [2024-07-25 10:39:15.812914] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:12.287 [2024-07-25 10:39:15.814518] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:12.544 [2024-07-25 10:39:16.053911] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:12.544 [2024-07-25 10:39:16.054804] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:12.801 [2024-07-25 10:39:16.398600] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:13.059 [2024-07-25 10:39:16.645262] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:13.059 10:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.059 10:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:13.059 10:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:13.059 10:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:13.059 10:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:13.059 10:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.059 10:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.316 10:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:13.316 "name": "raid_bdev1", 00:24:13.316 "uuid": "2e060238-84d1-4154-89e9-6ce857d27083", 00:24:13.316 "strip_size_kb": 0, 00:24:13.316 "state": "online", 00:24:13.316 "raid_level": "raid1", 00:24:13.316 "superblock": false, 00:24:13.316 "num_base_bdevs": 4, 00:24:13.316 "num_base_bdevs_discovered": 4, 00:24:13.316 "num_base_bdevs_operational": 4, 00:24:13.316 "process": { 00:24:13.316 "type": "rebuild", 00:24:13.316 "target": "spare", 00:24:13.316 "progress": { 00:24:13.316 "blocks": 12288, 00:24:13.316 "percent": 18 00:24:13.316 } 00:24:13.316 }, 00:24:13.316 "base_bdevs_list": [ 00:24:13.316 { 00:24:13.316 "name": "spare", 00:24:13.316 "uuid": "5f2a8402-53df-5915-9297-c1b3e570dc61", 00:24:13.316 "is_configured": true, 00:24:13.316 "data_offset": 0, 00:24:13.316 "data_size": 65536 00:24:13.316 }, 00:24:13.316 { 00:24:13.316 "name": "BaseBdev2", 00:24:13.316 "uuid": "cf08cdd1-8dcc-502f-9e06-b95b82a52f7f", 00:24:13.316 "is_configured": true, 00:24:13.316 "data_offset": 0, 00:24:13.316 "data_size": 65536 00:24:13.316 }, 00:24:13.316 { 00:24:13.316 "name": "BaseBdev3", 00:24:13.316 "uuid": "028db13b-c53c-520b-81eb-54baac8445da", 00:24:13.316 "is_configured": true, 00:24:13.316 "data_offset": 0, 00:24:13.316 "data_size": 65536 00:24:13.316 }, 00:24:13.316 { 00:24:13.316 "name": "BaseBdev4", 00:24:13.316 "uuid": "aeacf647-c47b-5f2e-b4e9-7031af119544", 00:24:13.316 "is_configured": true, 00:24:13.316 "data_offset": 0, 00:24:13.316 "data_size": 65536 00:24:13.316 } 00:24:13.316 ] 00:24:13.316 }' 00:24:13.316 10:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:13.316 
[2024-07-25 10:39:16.983154] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:13.316 10:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:13.317 10:39:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:13.574 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:13.574 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:24:13.574 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:24:13.574 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:24:13.574 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:24:13.574 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:13.574 [2024-07-25 10:39:17.201128] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:13.574 [2024-07-25 10:39:17.201436] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:13.832 [2024-07-25 10:39:17.313698] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:13.832 [2024-07-25 10:39:17.533798] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x2350830 00:24:13.832 [2024-07-25 10:39:17.533857] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x24ec750 00:24:14.089 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:24:14.089 10:39:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:24:14.089 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:14.089 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:14.089 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:14.089 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:14.089 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:14.089 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.089 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.089 [2024-07-25 10:39:17.765370] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:14.347 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:14.347 "name": "raid_bdev1", 00:24:14.347 "uuid": "2e060238-84d1-4154-89e9-6ce857d27083", 00:24:14.347 "strip_size_kb": 0, 00:24:14.347 "state": "online", 00:24:14.347 "raid_level": "raid1", 00:24:14.347 "superblock": false, 00:24:14.347 "num_base_bdevs": 4, 00:24:14.347 "num_base_bdevs_discovered": 3, 00:24:14.347 "num_base_bdevs_operational": 3, 00:24:14.347 "process": { 00:24:14.347 "type": "rebuild", 00:24:14.347 "target": "spare", 00:24:14.347 "progress": { 00:24:14.347 "blocks": 22528, 00:24:14.347 "percent": 34 00:24:14.347 } 00:24:14.347 }, 00:24:14.347 "base_bdevs_list": [ 00:24:14.347 { 00:24:14.347 "name": "spare", 00:24:14.347 "uuid": "5f2a8402-53df-5915-9297-c1b3e570dc61", 00:24:14.347 "is_configured": true, 00:24:14.347 "data_offset": 0, 00:24:14.347 
"data_size": 65536 00:24:14.347 }, 00:24:14.347 { 00:24:14.347 "name": null, 00:24:14.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.347 "is_configured": false, 00:24:14.347 "data_offset": 0, 00:24:14.347 "data_size": 65536 00:24:14.347 }, 00:24:14.347 { 00:24:14.347 "name": "BaseBdev3", 00:24:14.347 "uuid": "028db13b-c53c-520b-81eb-54baac8445da", 00:24:14.347 "is_configured": true, 00:24:14.347 "data_offset": 0, 00:24:14.347 "data_size": 65536 00:24:14.347 }, 00:24:14.347 { 00:24:14.347 "name": "BaseBdev4", 00:24:14.347 "uuid": "aeacf647-c47b-5f2e-b4e9-7031af119544", 00:24:14.347 "is_configured": true, 00:24:14.347 "data_offset": 0, 00:24:14.347 "data_size": 65536 00:24:14.347 } 00:24:14.347 ] 00:24:14.347 }' 00:24:14.347 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:14.347 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:14.347 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:14.347 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.347 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=921 00:24:14.347 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:14.347 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:14.347 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:14.347 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:14.347 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:14.347 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:14.347 10:39:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.347 10:39:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.605 [2024-07-25 10:39:18.091463] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:14.605 10:39:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:14.605 "name": "raid_bdev1", 00:24:14.605 "uuid": "2e060238-84d1-4154-89e9-6ce857d27083", 00:24:14.605 "strip_size_kb": 0, 00:24:14.605 "state": "online", 00:24:14.605 "raid_level": "raid1", 00:24:14.605 "superblock": false, 00:24:14.605 "num_base_bdevs": 4, 00:24:14.605 "num_base_bdevs_discovered": 3, 00:24:14.605 "num_base_bdevs_operational": 3, 00:24:14.605 "process": { 00:24:14.605 "type": "rebuild", 00:24:14.605 "target": "spare", 00:24:14.605 "progress": { 00:24:14.605 "blocks": 26624, 00:24:14.605 "percent": 40 00:24:14.605 } 00:24:14.605 }, 00:24:14.605 "base_bdevs_list": [ 00:24:14.605 { 00:24:14.605 "name": "spare", 00:24:14.605 "uuid": "5f2a8402-53df-5915-9297-c1b3e570dc61", 00:24:14.605 "is_configured": true, 00:24:14.605 "data_offset": 0, 00:24:14.605 "data_size": 65536 00:24:14.605 }, 00:24:14.605 { 00:24:14.605 "name": null, 00:24:14.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.605 "is_configured": false, 00:24:14.605 "data_offset": 0, 00:24:14.605 "data_size": 65536 00:24:14.605 }, 00:24:14.605 { 00:24:14.605 "name": "BaseBdev3", 00:24:14.605 "uuid": "028db13b-c53c-520b-81eb-54baac8445da", 00:24:14.605 "is_configured": true, 00:24:14.605 "data_offset": 0, 00:24:14.605 "data_size": 65536 00:24:14.605 }, 00:24:14.605 { 00:24:14.605 "name": "BaseBdev4", 00:24:14.605 "uuid": "aeacf647-c47b-5f2e-b4e9-7031af119544", 00:24:14.605 "is_configured": true, 00:24:14.605 
"data_offset": 0, 00:24:14.605 "data_size": 65536 00:24:14.605 } 00:24:14.605 ] 00:24:14.605 }' 00:24:14.605 10:39:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:14.605 10:39:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:14.605 10:39:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:14.605 10:39:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.605 10:39:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:14.605 [2024-07-25 10:39:18.311484] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:15.171 [2024-07-25 10:39:18.646816] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:15.736 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:15.736 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:15.737 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:15.737 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:15.737 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:15.737 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:15.737 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.737 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.737 [2024-07-25 
10:39:19.317329] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:15.737 [2024-07-25 10:39:19.436755] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:15.994 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:15.994 "name": "raid_bdev1", 00:24:15.994 "uuid": "2e060238-84d1-4154-89e9-6ce857d27083", 00:24:15.994 "strip_size_kb": 0, 00:24:15.994 "state": "online", 00:24:15.994 "raid_level": "raid1", 00:24:15.994 "superblock": false, 00:24:15.994 "num_base_bdevs": 4, 00:24:15.994 "num_base_bdevs_discovered": 3, 00:24:15.994 "num_base_bdevs_operational": 3, 00:24:15.994 "process": { 00:24:15.994 "type": "rebuild", 00:24:15.994 "target": "spare", 00:24:15.994 "progress": { 00:24:15.994 "blocks": 47104, 00:24:15.994 "percent": 71 00:24:15.994 } 00:24:15.994 }, 00:24:15.994 "base_bdevs_list": [ 00:24:15.994 { 00:24:15.994 "name": "spare", 00:24:15.994 "uuid": "5f2a8402-53df-5915-9297-c1b3e570dc61", 00:24:15.994 "is_configured": true, 00:24:15.994 "data_offset": 0, 00:24:15.994 "data_size": 65536 00:24:15.994 }, 00:24:15.994 { 00:24:15.994 "name": null, 00:24:15.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.994 "is_configured": false, 00:24:15.994 "data_offset": 0, 00:24:15.994 "data_size": 65536 00:24:15.994 }, 00:24:15.994 { 00:24:15.994 "name": "BaseBdev3", 00:24:15.994 "uuid": "028db13b-c53c-520b-81eb-54baac8445da", 00:24:15.994 "is_configured": true, 00:24:15.994 "data_offset": 0, 00:24:15.994 "data_size": 65536 00:24:15.994 }, 00:24:15.994 { 00:24:15.994 "name": "BaseBdev4", 00:24:15.994 "uuid": "aeacf647-c47b-5f2e-b4e9-7031af119544", 00:24:15.995 "is_configured": true, 00:24:15.995 "data_offset": 0, 00:24:15.995 "data_size": 65536 00:24:15.995 } 00:24:15.995 ] 00:24:15.995 }' 00:24:15.995 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- 
# jq -r '.process.type // "none"' 00:24:15.995 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:15.995 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:15.995 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.995 10:39:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:16.253 [2024-07-25 10:39:19.779554] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:24:16.253 [2024-07-25 10:39:19.780544] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:24:17.187 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:17.187 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:17.187 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:17.187 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:17.187 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:17.187 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:17.187 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.187 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.187 [2024-07-25 10:39:20.683373] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:17.187 [2024-07-25 10:39:20.790765] 
bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:17.187 [2024-07-25 10:39:20.794892] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:17.445 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:17.445 "name": "raid_bdev1", 00:24:17.445 "uuid": "2e060238-84d1-4154-89e9-6ce857d27083", 00:24:17.445 "strip_size_kb": 0, 00:24:17.445 "state": "online", 00:24:17.445 "raid_level": "raid1", 00:24:17.445 "superblock": false, 00:24:17.445 "num_base_bdevs": 4, 00:24:17.445 "num_base_bdevs_discovered": 3, 00:24:17.445 "num_base_bdevs_operational": 3, 00:24:17.446 "base_bdevs_list": [ 00:24:17.446 { 00:24:17.446 "name": "spare", 00:24:17.446 "uuid": "5f2a8402-53df-5915-9297-c1b3e570dc61", 00:24:17.446 "is_configured": true, 00:24:17.446 "data_offset": 0, 00:24:17.446 "data_size": 65536 00:24:17.446 }, 00:24:17.446 { 00:24:17.446 "name": null, 00:24:17.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.446 "is_configured": false, 00:24:17.446 "data_offset": 0, 00:24:17.446 "data_size": 65536 00:24:17.446 }, 00:24:17.446 { 00:24:17.446 "name": "BaseBdev3", 00:24:17.446 "uuid": "028db13b-c53c-520b-81eb-54baac8445da", 00:24:17.446 "is_configured": true, 00:24:17.446 "data_offset": 0, 00:24:17.446 "data_size": 65536 00:24:17.446 }, 00:24:17.446 { 00:24:17.446 "name": "BaseBdev4", 00:24:17.446 "uuid": "aeacf647-c47b-5f2e-b4e9-7031af119544", 00:24:17.446 "is_configured": true, 00:24:17.446 "data_offset": 0, 00:24:17.446 "data_size": 65536 00:24:17.446 } 00:24:17.446 ] 00:24:17.446 }' 00:24:17.446 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:17.446 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:17.446 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:17.446 10:39:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:17.446 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:24:17.446 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:17.446 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:17.446 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:17.446 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:17.446 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:17.446 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.446 10:39:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.704 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:17.704 "name": "raid_bdev1", 00:24:17.704 "uuid": "2e060238-84d1-4154-89e9-6ce857d27083", 00:24:17.704 "strip_size_kb": 0, 00:24:17.704 "state": "online", 00:24:17.704 "raid_level": "raid1", 00:24:17.704 "superblock": false, 00:24:17.704 "num_base_bdevs": 4, 00:24:17.704 "num_base_bdevs_discovered": 3, 00:24:17.704 "num_base_bdevs_operational": 3, 00:24:17.704 "base_bdevs_list": [ 00:24:17.704 { 00:24:17.704 "name": "spare", 00:24:17.704 "uuid": "5f2a8402-53df-5915-9297-c1b3e570dc61", 00:24:17.704 "is_configured": true, 00:24:17.704 "data_offset": 0, 00:24:17.704 "data_size": 65536 00:24:17.704 }, 00:24:17.704 { 00:24:17.704 "name": null, 00:24:17.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.704 "is_configured": false, 00:24:17.704 "data_offset": 0, 00:24:17.704 "data_size": 65536 00:24:17.704 }, 00:24:17.704 
{ 00:24:17.704 "name": "BaseBdev3", 00:24:17.704 "uuid": "028db13b-c53c-520b-81eb-54baac8445da", 00:24:17.704 "is_configured": true, 00:24:17.704 "data_offset": 0, 00:24:17.704 "data_size": 65536 00:24:17.704 }, 00:24:17.704 { 00:24:17.704 "name": "BaseBdev4", 00:24:17.704 "uuid": "aeacf647-c47b-5f2e-b4e9-7031af119544", 00:24:17.704 "is_configured": true, 00:24:17.704 "data_offset": 0, 00:24:17.704 "data_size": 65536 00:24:17.704 } 00:24:17.704 ] 00:24:17.704 }' 00:24:17.704 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:17.704 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:17.704 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:17.704 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:17.705 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:17.705 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:17.705 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:17.705 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:17.705 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:17.705 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:17.705 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:17.705 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:17.705 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:17.705 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 
-- # local tmp 00:24:17.705 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.705 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.962 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:17.962 "name": "raid_bdev1", 00:24:17.962 "uuid": "2e060238-84d1-4154-89e9-6ce857d27083", 00:24:17.962 "strip_size_kb": 0, 00:24:17.962 "state": "online", 00:24:17.962 "raid_level": "raid1", 00:24:17.962 "superblock": false, 00:24:17.962 "num_base_bdevs": 4, 00:24:17.962 "num_base_bdevs_discovered": 3, 00:24:17.962 "num_base_bdevs_operational": 3, 00:24:17.962 "base_bdevs_list": [ 00:24:17.962 { 00:24:17.962 "name": "spare", 00:24:17.962 "uuid": "5f2a8402-53df-5915-9297-c1b3e570dc61", 00:24:17.962 "is_configured": true, 00:24:17.962 "data_offset": 0, 00:24:17.962 "data_size": 65536 00:24:17.962 }, 00:24:17.962 { 00:24:17.962 "name": null, 00:24:17.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.962 "is_configured": false, 00:24:17.962 "data_offset": 0, 00:24:17.962 "data_size": 65536 00:24:17.962 }, 00:24:17.962 { 00:24:17.962 "name": "BaseBdev3", 00:24:17.962 "uuid": "028db13b-c53c-520b-81eb-54baac8445da", 00:24:17.962 "is_configured": true, 00:24:17.962 "data_offset": 0, 00:24:17.962 "data_size": 65536 00:24:17.962 }, 00:24:17.962 { 00:24:17.962 "name": "BaseBdev4", 00:24:17.962 "uuid": "aeacf647-c47b-5f2e-b4e9-7031af119544", 00:24:17.962 "is_configured": true, 00:24:17.962 "data_offset": 0, 00:24:17.962 "data_size": 65536 00:24:17.962 } 00:24:17.962 ] 00:24:17.962 }' 00:24:17.962 10:39:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:17.962 10:39:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:18.527 10:39:22 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:18.784 [2024-07-25 10:39:22.405188] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:18.784 [2024-07-25 10:39:22.405226] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:18.784 00:24:18.784 Latency(us) 00:24:18.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.784 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:18.784 raid_bdev1 : 11.47 92.77 278.31 0.00 0.00 14632.03 242.73 121168.78 00:24:18.784 =================================================================================================================== 00:24:18.784 Total : 92.77 278.31 0.00 0.00 14632.03 242.73 121168.78 00:24:18.784 [2024-07-25 10:39:22.436761] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.784 [2024-07-25 10:39:22.436794] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:18.784 [2024-07-25 10:39:22.436883] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:18.784 [2024-07-25 10:39:22.436913] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x234fa10 name raid_bdev1, state offline 00:24:18.784 0 00:24:18.784 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.784 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:24:19.041 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:19.041 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:19.041 10:39:22 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:19.041 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:19.041 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:19.041 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:19.041 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:19.041 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:19.041 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:19.041 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:19.041 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:19.041 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:19.041 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:19.298 /dev/nbd0 00:24:19.298 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:19.298 10:39:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:19.298 10:39:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:19.298 10:39:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:24:19.298 10:39:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:19.298 10:39:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:19.298 10:39:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:19.298 10:39:22 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:24:19.298 10:39:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:19.298 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:19.298 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:19.556 1+0 records in 00:24:19.556 1+0 records out 00:24:19.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229474 s, 17.8 MB/s 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:24:19.556 10:39:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:19.556 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:19.825 /dev/nbd1 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # 
break 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:19.825 1+0 records in 00:24:19.825 1+0 records out 00:24:19.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193155 s, 21.2 MB/s 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:19.825 10:39:23 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:19.825 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:20.089 10:39:23 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:20.089 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:20.346 /dev/nbd1 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:20.346 1+0 records in 00:24:20.346 1+0 records out 00:24:20.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202384 s, 20.2 MB/s 00:24:20.346 10:39:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:20.346 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:20.347 10:39:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:24:20.347 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:20.347 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:20.347 10:39:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:20.347 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:20.347 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:20.347 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:20.347 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:20.347 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:20.347 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:20.347 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:20.604 
10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:20.604 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2453029 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 2453029 ']' 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 2453029 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.862 10:39:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2453029 00:24:21.120 10:39:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:21.120 10:39:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:21.120 10:39:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2453029' 00:24:21.120 killing process with pid 2453029 00:24:21.120 10:39:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 2453029 00:24:21.120 Received shutdown signal, test time was about 13.609906 seconds 00:24:21.120 00:24:21.120 Latency(us) 00:24:21.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.120 =================================================================================================================== 00:24:21.120 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.120 [2024-07-25 10:39:24.579046] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:24:21.120 10:39:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 2453029 00:24:21.120 [2024-07-25 10:39:24.636096] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:24:21.382 00:24:21.382 real 0m19.613s 00:24:21.382 user 0m30.799s 00:24:21.382 sys 0m2.897s 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:21.382 ************************************ 00:24:21.382 END TEST raid_rebuild_test_io 00:24:21.382 ************************************ 00:24:21.382 10:39:24 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:24:21.382 10:39:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:21.382 10:39:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:21.382 10:39:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:21.382 ************************************ 00:24:21.382 START TEST raid_rebuild_test_sb_io 00:24:21.382 ************************************ 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:24:21.382 10:39:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 
00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2456044 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2456044 /var/tmp/spdk-raid.sock 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 2456044 ']' 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:21.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:21.382 10:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:21.382 [2024-07-25 10:39:25.043146] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:24:21.382 [2024-07-25 10:39:25.043236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2456044 ] 00:24:21.382 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:21.382 Zero copy mechanism will not be used. 00:24:21.698 [2024-07-25 10:39:25.125304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.698 [2024-07-25 10:39:25.247823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.698 [2024-07-25 10:39:25.320524] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:21.698 [2024-07-25 10:39:25.320571] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:22.631 10:39:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:22.631 10:39:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:24:22.631 10:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:22.631 10:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:22.631 BaseBdev1_malloc 00:24:22.631 10:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p 
BaseBdev1 00:24:22.888 [2024-07-25 10:39:26.487164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:22.888 [2024-07-25 10:39:26.487223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:22.888 [2024-07-25 10:39:26.487250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c7d430 00:24:22.888 [2024-07-25 10:39:26.487265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:22.888 [2024-07-25 10:39:26.488778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:22.888 [2024-07-25 10:39:26.488806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:22.888 BaseBdev1 00:24:22.888 10:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:22.888 10:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:23.146 BaseBdev2_malloc 00:24:23.146 10:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:23.403 [2024-07-25 10:39:26.999936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:23.403 [2024-07-25 10:39:26.999995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.403 [2024-07-25 10:39:27.000023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e20a20 00:24:23.403 [2024-07-25 10:39:27.000039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.403 [2024-07-25 10:39:27.001485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.403 [2024-07-25 
10:39:27.001513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:23.403 BaseBdev2 00:24:23.403 10:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:23.403 10:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:23.660 BaseBdev3_malloc 00:24:23.660 10:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:23.917 [2024-07-25 10:39:27.500287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:23.917 [2024-07-25 10:39:27.500346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.917 [2024-07-25 10:39:27.500370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e17700 00:24:23.917 [2024-07-25 10:39:27.500385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.917 [2024-07-25 10:39:27.501768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.917 [2024-07-25 10:39:27.501796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:23.917 BaseBdev3 00:24:23.917 10:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:23.917 10:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:24.175 BaseBdev4_malloc 00:24:24.175 10:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:24.433 [2024-07-25 10:39:28.048714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:24.433 [2024-07-25 10:39:28.048784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:24.433 [2024-07-25 10:39:28.048814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e15290 00:24:24.433 [2024-07-25 10:39:28.048838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:24.433 [2024-07-25 10:39:28.050602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:24.433 [2024-07-25 10:39:28.050631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:24.433 BaseBdev4 00:24:24.433 10:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:24.691 spare_malloc 00:24:24.691 10:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:24.949 spare_delay 00:24:24.949 10:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:25.206 [2024-07-25 10:39:28.774268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:25.206 [2024-07-25 10:39:28.774321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.206 [2024-07-25 10:39:28.774344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x1c76330 00:24:25.206 [2024-07-25 10:39:28.774359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.206 [2024-07-25 10:39:28.775788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.206 [2024-07-25 10:39:28.775816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:25.206 spare 00:24:25.206 10:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:25.464 [2024-07-25 10:39:29.034987] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:25.464 [2024-07-25 10:39:29.036168] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:25.464 [2024-07-25 10:39:29.036232] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:25.464 [2024-07-25 10:39:29.036290] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:25.464 [2024-07-25 10:39:29.036510] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c77a10 00:24:25.464 [2024-07-25 10:39:29.036529] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:25.464 [2024-07-25 10:39:29.036711] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c943f0 00:24:25.464 [2024-07-25 10:39:29.036888] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c77a10 00:24:25.464 [2024-07-25 10:39:29.036905] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c77a10 00:24:25.464 [2024-07-25 10:39:29.037007] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:25.464 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:25.464 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:25.464 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:25.464 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:25.464 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:25.464 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:25.464 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:25.464 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:25.464 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:25.464 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:25.464 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.464 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.722 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:25.722 "name": "raid_bdev1", 00:24:25.722 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:25.722 "strip_size_kb": 0, 00:24:25.722 "state": "online", 00:24:25.722 "raid_level": "raid1", 00:24:25.722 "superblock": true, 00:24:25.722 "num_base_bdevs": 4, 00:24:25.723 "num_base_bdevs_discovered": 4, 00:24:25.723 "num_base_bdevs_operational": 4, 00:24:25.723 "base_bdevs_list": [ 00:24:25.723 { 00:24:25.723 "name": "BaseBdev1", 00:24:25.723 "uuid": "f4b80fc6-5989-59e6-880d-dcddb3efb14d", 00:24:25.723 
"is_configured": true, 00:24:25.723 "data_offset": 2048, 00:24:25.723 "data_size": 63488 00:24:25.723 }, 00:24:25.723 { 00:24:25.723 "name": "BaseBdev2", 00:24:25.723 "uuid": "20eb9baa-5254-5679-8cb2-bf7e24a8ba3e", 00:24:25.723 "is_configured": true, 00:24:25.723 "data_offset": 2048, 00:24:25.723 "data_size": 63488 00:24:25.723 }, 00:24:25.723 { 00:24:25.723 "name": "BaseBdev3", 00:24:25.723 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:25.723 "is_configured": true, 00:24:25.723 "data_offset": 2048, 00:24:25.723 "data_size": 63488 00:24:25.723 }, 00:24:25.723 { 00:24:25.723 "name": "BaseBdev4", 00:24:25.723 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:25.723 "is_configured": true, 00:24:25.723 "data_offset": 2048, 00:24:25.723 "data_size": 63488 00:24:25.723 } 00:24:25.723 ] 00:24:25.723 }' 00:24:25.723 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:25.723 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:26.288 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:26.288 10:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:24:26.547 [2024-07-25 10:39:30.086122] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:26.547 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:24:26.547 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.547 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:26.806 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 
00:24:26.806 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:24:26.806 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:26.806 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:26.806 [2024-07-25 10:39:30.497637] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c7e550 00:24:26.806 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:26.806 Zero copy mechanism will not be used. 00:24:26.806 Running I/O for 60 seconds... 00:24:27.064 [2024-07-25 10:39:30.622385] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:27.064 [2024-07-25 10:39:30.636976] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1c7e550 00:24:27.064 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:27.064 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:27.064 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:27.064 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:27.064 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:27.064 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:27.064 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:27.064 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:27.064 10:39:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:27.064 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:27.064 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.064 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.322 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:27.322 "name": "raid_bdev1", 00:24:27.322 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:27.322 "strip_size_kb": 0, 00:24:27.322 "state": "online", 00:24:27.322 "raid_level": "raid1", 00:24:27.322 "superblock": true, 00:24:27.322 "num_base_bdevs": 4, 00:24:27.322 "num_base_bdevs_discovered": 3, 00:24:27.322 "num_base_bdevs_operational": 3, 00:24:27.322 "base_bdevs_list": [ 00:24:27.322 { 00:24:27.322 "name": null, 00:24:27.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.322 "is_configured": false, 00:24:27.323 "data_offset": 2048, 00:24:27.323 "data_size": 63488 00:24:27.323 }, 00:24:27.323 { 00:24:27.323 "name": "BaseBdev2", 00:24:27.323 "uuid": "20eb9baa-5254-5679-8cb2-bf7e24a8ba3e", 00:24:27.323 "is_configured": true, 00:24:27.323 "data_offset": 2048, 00:24:27.323 "data_size": 63488 00:24:27.323 }, 00:24:27.323 { 00:24:27.323 "name": "BaseBdev3", 00:24:27.323 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:27.323 "is_configured": true, 00:24:27.323 "data_offset": 2048, 00:24:27.323 "data_size": 63488 00:24:27.323 }, 00:24:27.323 { 00:24:27.323 "name": "BaseBdev4", 00:24:27.323 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:27.323 "is_configured": true, 00:24:27.323 "data_offset": 2048, 00:24:27.323 "data_size": 63488 00:24:27.323 } 00:24:27.323 ] 00:24:27.323 }' 00:24:27.323 10:39:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:27.323 10:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:27.888 10:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:28.146 [2024-07-25 10:39:31.779549] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:28.146 10:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:24:28.146 [2024-07-25 10:39:31.835736] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e14860 00:24:28.146 [2024-07-25 10:39:31.837729] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:28.403 [2024-07-25 10:39:31.953440] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:28.403 [2024-07-25 10:39:31.955008] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:28.661 [2024-07-25 10:39:32.159208] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:28.661 [2024-07-25 10:39:32.159939] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:28.919 [2024-07-25 10:39:32.532814] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:29.220 [2024-07-25 10:39:32.679402] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:29.220 [2024-07-25 10:39:32.680260] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 
00:24:29.220 10:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:29.220 10:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:29.220 10:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:29.220 10:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:29.220 10:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:29.220 10:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.220 10:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.478 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:29.478 "name": "raid_bdev1", 00:24:29.478 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:29.478 "strip_size_kb": 0, 00:24:29.478 "state": "online", 00:24:29.478 "raid_level": "raid1", 00:24:29.478 "superblock": true, 00:24:29.478 "num_base_bdevs": 4, 00:24:29.478 "num_base_bdevs_discovered": 4, 00:24:29.478 "num_base_bdevs_operational": 4, 00:24:29.478 "process": { 00:24:29.478 "type": "rebuild", 00:24:29.478 "target": "spare", 00:24:29.478 "progress": { 00:24:29.478 "blocks": 14336, 00:24:29.478 "percent": 22 00:24:29.478 } 00:24:29.478 }, 00:24:29.478 "base_bdevs_list": [ 00:24:29.478 { 00:24:29.478 "name": "spare", 00:24:29.478 "uuid": "fd3b38ec-d7c4-5a02-80e3-c949a5e4cf8c", 00:24:29.478 "is_configured": true, 00:24:29.478 "data_offset": 2048, 00:24:29.478 "data_size": 63488 00:24:29.478 }, 00:24:29.478 { 00:24:29.478 "name": "BaseBdev2", 00:24:29.478 "uuid": "20eb9baa-5254-5679-8cb2-bf7e24a8ba3e", 00:24:29.478 "is_configured": true, 00:24:29.478 
"data_offset": 2048, 00:24:29.478 "data_size": 63488 00:24:29.478 }, 00:24:29.478 { 00:24:29.478 "name": "BaseBdev3", 00:24:29.478 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:29.478 "is_configured": true, 00:24:29.478 "data_offset": 2048, 00:24:29.478 "data_size": 63488 00:24:29.478 }, 00:24:29.478 { 00:24:29.478 "name": "BaseBdev4", 00:24:29.478 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:29.478 "is_configured": true, 00:24:29.478 "data_offset": 2048, 00:24:29.478 "data_size": 63488 00:24:29.478 } 00:24:29.478 ] 00:24:29.478 }' 00:24:29.478 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:29.478 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:29.478 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:29.478 [2024-07-25 10:39:33.154632] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:29.478 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:29.478 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:29.736 [2024-07-25 10:39:33.373525] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:29.994 [2024-07-25 10:39:33.488302] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:29.994 [2024-07-25 10:39:33.502577] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:29.994 [2024-07-25 10:39:33.502643] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:29.994 [2024-07-25 10:39:33.502662] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:24:29.994 [2024-07-25 10:39:33.526429] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1c7e550 00:24:29.994 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:29.994 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:29.994 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:29.994 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:29.994 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:29.994 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:29.994 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:29.994 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:29.994 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:29.994 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:29.994 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.994 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.252 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:30.252 "name": "raid_bdev1", 00:24:30.252 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:30.252 "strip_size_kb": 0, 00:24:30.252 "state": "online", 00:24:30.252 "raid_level": "raid1", 00:24:30.252 "superblock": true, 00:24:30.252 "num_base_bdevs": 4, 
00:24:30.252 "num_base_bdevs_discovered": 3, 00:24:30.252 "num_base_bdevs_operational": 3, 00:24:30.252 "base_bdevs_list": [ 00:24:30.252 { 00:24:30.252 "name": null, 00:24:30.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.252 "is_configured": false, 00:24:30.252 "data_offset": 2048, 00:24:30.252 "data_size": 63488 00:24:30.252 }, 00:24:30.252 { 00:24:30.252 "name": "BaseBdev2", 00:24:30.252 "uuid": "20eb9baa-5254-5679-8cb2-bf7e24a8ba3e", 00:24:30.252 "is_configured": true, 00:24:30.252 "data_offset": 2048, 00:24:30.252 "data_size": 63488 00:24:30.252 }, 00:24:30.252 { 00:24:30.252 "name": "BaseBdev3", 00:24:30.252 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:30.252 "is_configured": true, 00:24:30.252 "data_offset": 2048, 00:24:30.252 "data_size": 63488 00:24:30.252 }, 00:24:30.252 { 00:24:30.252 "name": "BaseBdev4", 00:24:30.252 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:30.252 "is_configured": true, 00:24:30.252 "data_offset": 2048, 00:24:30.252 "data_size": 63488 00:24:30.252 } 00:24:30.252 ] 00:24:30.252 }' 00:24:30.252 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:30.252 10:39:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:30.818 10:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:30.818 10:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:30.818 10:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:30.818 10:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:30.818 10:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:30.818 10:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.818 10:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.076 10:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:31.076 "name": "raid_bdev1", 00:24:31.076 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:31.076 "strip_size_kb": 0, 00:24:31.076 "state": "online", 00:24:31.076 "raid_level": "raid1", 00:24:31.076 "superblock": true, 00:24:31.076 "num_base_bdevs": 4, 00:24:31.076 "num_base_bdevs_discovered": 3, 00:24:31.076 "num_base_bdevs_operational": 3, 00:24:31.076 "base_bdevs_list": [ 00:24:31.076 { 00:24:31.076 "name": null, 00:24:31.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.076 "is_configured": false, 00:24:31.076 "data_offset": 2048, 00:24:31.076 "data_size": 63488 00:24:31.076 }, 00:24:31.076 { 00:24:31.076 "name": "BaseBdev2", 00:24:31.076 "uuid": "20eb9baa-5254-5679-8cb2-bf7e24a8ba3e", 00:24:31.076 "is_configured": true, 00:24:31.076 "data_offset": 2048, 00:24:31.076 "data_size": 63488 00:24:31.076 }, 00:24:31.076 { 00:24:31.076 "name": "BaseBdev3", 00:24:31.076 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:31.076 "is_configured": true, 00:24:31.076 "data_offset": 2048, 00:24:31.076 "data_size": 63488 00:24:31.076 }, 00:24:31.076 { 00:24:31.076 "name": "BaseBdev4", 00:24:31.076 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:31.076 "is_configured": true, 00:24:31.076 "data_offset": 2048, 00:24:31.076 "data_size": 63488 00:24:31.076 } 00:24:31.076 ] 00:24:31.076 }' 00:24:31.076 10:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:31.076 10:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:31.076 10:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:31.334 10:39:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:31.334 10:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:31.591 [2024-07-25 10:39:35.049284] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:31.591 10:39:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:31.591 [2024-07-25 10:39:35.102474] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c7bd40 00:24:31.592 [2024-07-25 10:39:35.104039] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:31.592 [2024-07-25 10:39:35.224273] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:31.592 [2024-07-25 10:39:35.225686] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:31.849 [2024-07-25 10:39:35.485815] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:32.415 [2024-07-25 10:39:35.855996] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:32.415 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:32.415 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:32.415 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:32.415 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:32.415 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 
00:24:32.415 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.415 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.673 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:32.673 "name": "raid_bdev1", 00:24:32.673 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:32.673 "strip_size_kb": 0, 00:24:32.673 "state": "online", 00:24:32.673 "raid_level": "raid1", 00:24:32.673 "superblock": true, 00:24:32.673 "num_base_bdevs": 4, 00:24:32.673 "num_base_bdevs_discovered": 4, 00:24:32.673 "num_base_bdevs_operational": 4, 00:24:32.673 "process": { 00:24:32.673 "type": "rebuild", 00:24:32.673 "target": "spare", 00:24:32.673 "progress": { 00:24:32.673 "blocks": 14336, 00:24:32.673 "percent": 22 00:24:32.673 } 00:24:32.673 }, 00:24:32.673 "base_bdevs_list": [ 00:24:32.673 { 00:24:32.673 "name": "spare", 00:24:32.673 "uuid": "fd3b38ec-d7c4-5a02-80e3-c949a5e4cf8c", 00:24:32.673 "is_configured": true, 00:24:32.673 "data_offset": 2048, 00:24:32.673 "data_size": 63488 00:24:32.673 }, 00:24:32.673 { 00:24:32.673 "name": "BaseBdev2", 00:24:32.673 "uuid": "20eb9baa-5254-5679-8cb2-bf7e24a8ba3e", 00:24:32.673 "is_configured": true, 00:24:32.673 "data_offset": 2048, 00:24:32.673 "data_size": 63488 00:24:32.673 }, 00:24:32.673 { 00:24:32.673 "name": "BaseBdev3", 00:24:32.673 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:32.673 "is_configured": true, 00:24:32.673 "data_offset": 2048, 00:24:32.673 "data_size": 63488 00:24:32.673 }, 00:24:32.673 { 00:24:32.673 "name": "BaseBdev4", 00:24:32.673 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:32.673 "is_configured": true, 00:24:32.673 "data_offset": 2048, 00:24:32.673 "data_size": 63488 00:24:32.673 } 00:24:32.673 ] 00:24:32.673 }' 00:24:32.673 10:39:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:32.673 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:32.673 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:32.931 [2024-07-25 10:39:36.403477] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:32.931 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:32.931 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:24:32.931 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:24:32.931 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:24:32.931 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:24:32.931 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:24:32.931 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:24:32.931 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:33.189 [2024-07-25 10:39:36.647621] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:33.189 [2024-07-25 10:39:36.732331] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:33.189 [2024-07-25 10:39:36.873094] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1c7e550 00:24:33.189 [2024-07-25 10:39:36.873132] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 
1 raid_ch: 0x1c7bd40 00:24:33.447 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:24:33.447 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:24:33.447 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:33.447 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:33.447 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:33.447 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:33.447 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:33.447 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.447 10:39:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.447 [2024-07-25 10:39:36.999313] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:33.705 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:33.705 "name": "raid_bdev1", 00:24:33.705 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:33.705 "strip_size_kb": 0, 00:24:33.705 "state": "online", 00:24:33.705 "raid_level": "raid1", 00:24:33.705 "superblock": true, 00:24:33.705 "num_base_bdevs": 4, 00:24:33.705 "num_base_bdevs_discovered": 3, 00:24:33.706 "num_base_bdevs_operational": 3, 00:24:33.706 "process": { 00:24:33.706 "type": "rebuild", 00:24:33.706 "target": "spare", 00:24:33.706 "progress": { 00:24:33.706 "blocks": 22528, 00:24:33.706 "percent": 35 00:24:33.706 } 00:24:33.706 }, 00:24:33.706 
"base_bdevs_list": [ 00:24:33.706 { 00:24:33.706 "name": "spare", 00:24:33.706 "uuid": "fd3b38ec-d7c4-5a02-80e3-c949a5e4cf8c", 00:24:33.706 "is_configured": true, 00:24:33.706 "data_offset": 2048, 00:24:33.706 "data_size": 63488 00:24:33.706 }, 00:24:33.706 { 00:24:33.706 "name": null, 00:24:33.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.706 "is_configured": false, 00:24:33.706 "data_offset": 2048, 00:24:33.706 "data_size": 63488 00:24:33.706 }, 00:24:33.706 { 00:24:33.706 "name": "BaseBdev3", 00:24:33.706 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:33.706 "is_configured": true, 00:24:33.706 "data_offset": 2048, 00:24:33.706 "data_size": 63488 00:24:33.706 }, 00:24:33.706 { 00:24:33.706 "name": "BaseBdev4", 00:24:33.706 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:33.706 "is_configured": true, 00:24:33.706 "data_offset": 2048, 00:24:33.706 "data_size": 63488 00:24:33.706 } 00:24:33.706 ] 00:24:33.706 }' 00:24:33.706 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:33.706 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:33.706 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:33.706 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:33.706 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=941 00:24:33.706 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:33.706 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:33.706 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:33.706 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:24:33.706 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:33.706 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:33.706 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.706 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.706 [2024-07-25 10:39:37.269199] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:33.964 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:33.964 "name": "raid_bdev1", 00:24:33.964 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:33.964 "strip_size_kb": 0, 00:24:33.964 "state": "online", 00:24:33.964 "raid_level": "raid1", 00:24:33.964 "superblock": true, 00:24:33.964 "num_base_bdevs": 4, 00:24:33.964 "num_base_bdevs_discovered": 3, 00:24:33.964 "num_base_bdevs_operational": 3, 00:24:33.964 "process": { 00:24:33.964 "type": "rebuild", 00:24:33.964 "target": "spare", 00:24:33.964 "progress": { 00:24:33.964 "blocks": 26624, 00:24:33.964 "percent": 41 00:24:33.964 } 00:24:33.964 }, 00:24:33.964 "base_bdevs_list": [ 00:24:33.964 { 00:24:33.964 "name": "spare", 00:24:33.964 "uuid": "fd3b38ec-d7c4-5a02-80e3-c949a5e4cf8c", 00:24:33.964 "is_configured": true, 00:24:33.964 "data_offset": 2048, 00:24:33.964 "data_size": 63488 00:24:33.964 }, 00:24:33.964 { 00:24:33.964 "name": null, 00:24:33.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.965 "is_configured": false, 00:24:33.965 "data_offset": 2048, 00:24:33.965 "data_size": 63488 00:24:33.965 }, 00:24:33.965 { 00:24:33.965 "name": "BaseBdev3", 00:24:33.965 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:33.965 
"is_configured": true, 00:24:33.965 "data_offset": 2048, 00:24:33.965 "data_size": 63488 00:24:33.965 }, 00:24:33.965 { 00:24:33.965 "name": "BaseBdev4", 00:24:33.965 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:33.965 "is_configured": true, 00:24:33.965 "data_offset": 2048, 00:24:33.965 "data_size": 63488 00:24:33.965 } 00:24:33.965 ] 00:24:33.965 }' 00:24:33.965 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:33.965 [2024-07-25 10:39:37.488878] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:33.965 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:33.965 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:33.965 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:33.965 10:39:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:34.531 [2024-07-25 10:39:37.958243] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:34.789 [2024-07-25 10:39:38.415407] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:35.047 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:35.047 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:35.047 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:35.047 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:35.047 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 
00:24:35.047 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:35.047 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.047 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.306 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:35.306 "name": "raid_bdev1", 00:24:35.306 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:35.306 "strip_size_kb": 0, 00:24:35.306 "state": "online", 00:24:35.306 "raid_level": "raid1", 00:24:35.306 "superblock": true, 00:24:35.306 "num_base_bdevs": 4, 00:24:35.306 "num_base_bdevs_discovered": 3, 00:24:35.306 "num_base_bdevs_operational": 3, 00:24:35.306 "process": { 00:24:35.306 "type": "rebuild", 00:24:35.306 "target": "spare", 00:24:35.306 "progress": { 00:24:35.306 "blocks": 45056, 00:24:35.306 "percent": 70 00:24:35.306 } 00:24:35.306 }, 00:24:35.306 "base_bdevs_list": [ 00:24:35.306 { 00:24:35.306 "name": "spare", 00:24:35.306 "uuid": "fd3b38ec-d7c4-5a02-80e3-c949a5e4cf8c", 00:24:35.306 "is_configured": true, 00:24:35.306 "data_offset": 2048, 00:24:35.306 "data_size": 63488 00:24:35.306 }, 00:24:35.306 { 00:24:35.306 "name": null, 00:24:35.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.306 "is_configured": false, 00:24:35.306 "data_offset": 2048, 00:24:35.306 "data_size": 63488 00:24:35.306 }, 00:24:35.306 { 00:24:35.306 "name": "BaseBdev3", 00:24:35.306 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:35.306 "is_configured": true, 00:24:35.306 "data_offset": 2048, 00:24:35.306 "data_size": 63488 00:24:35.306 }, 00:24:35.306 { 00:24:35.306 "name": "BaseBdev4", 00:24:35.306 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:35.306 "is_configured": true, 00:24:35.306 "data_offset": 2048, 
00:24:35.306 "data_size": 63488 00:24:35.306 } 00:24:35.306 ] 00:24:35.306 }' 00:24:35.306 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:35.306 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:35.306 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:35.306 [2024-07-25 10:39:38.864408] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:35.306 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:35.306 10:39:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:35.564 [2024-07-25 10:39:39.093072] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:24:36.131 [2024-07-25 10:39:39.810263] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:36.389 [2024-07-25 10:39:39.874056] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:36.389 [2024-07-25 10:39:39.876706] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.389 10:39:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:36.389 10:39:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:36.389 10:39:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:36.389 10:39:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:36.389 10:39:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:36.389 10:39:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:36.389 10:39:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.389 10:39:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.646 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:36.646 "name": "raid_bdev1", 00:24:36.646 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:36.646 "strip_size_kb": 0, 00:24:36.646 "state": "online", 00:24:36.646 "raid_level": "raid1", 00:24:36.646 "superblock": true, 00:24:36.646 "num_base_bdevs": 4, 00:24:36.646 "num_base_bdevs_discovered": 3, 00:24:36.646 "num_base_bdevs_operational": 3, 00:24:36.646 "base_bdevs_list": [ 00:24:36.646 { 00:24:36.646 "name": "spare", 00:24:36.646 "uuid": "fd3b38ec-d7c4-5a02-80e3-c949a5e4cf8c", 00:24:36.646 "is_configured": true, 00:24:36.646 "data_offset": 2048, 00:24:36.646 "data_size": 63488 00:24:36.646 }, 00:24:36.646 { 00:24:36.646 "name": null, 00:24:36.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.646 "is_configured": false, 00:24:36.646 "data_offset": 2048, 00:24:36.647 "data_size": 63488 00:24:36.647 }, 00:24:36.647 { 00:24:36.647 "name": "BaseBdev3", 00:24:36.647 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:36.647 "is_configured": true, 00:24:36.647 "data_offset": 2048, 00:24:36.647 "data_size": 63488 00:24:36.647 }, 00:24:36.647 { 00:24:36.647 "name": "BaseBdev4", 00:24:36.647 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:36.647 "is_configured": true, 00:24:36.647 "data_offset": 2048, 00:24:36.647 "data_size": 63488 00:24:36.647 } 00:24:36.647 ] 00:24:36.647 }' 00:24:36.647 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:36.647 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:36.647 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:36.647 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:36.647 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:24:36.647 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:36.647 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:36.647 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:36.647 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:36.647 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:36.647 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.647 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.904 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:36.904 "name": "raid_bdev1", 00:24:36.904 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:36.904 "strip_size_kb": 0, 00:24:36.904 "state": "online", 00:24:36.904 "raid_level": "raid1", 00:24:36.904 "superblock": true, 00:24:36.904 "num_base_bdevs": 4, 00:24:36.904 "num_base_bdevs_discovered": 3, 00:24:36.904 "num_base_bdevs_operational": 3, 00:24:36.904 "base_bdevs_list": [ 00:24:36.904 { 00:24:36.904 "name": "spare", 00:24:36.904 "uuid": "fd3b38ec-d7c4-5a02-80e3-c949a5e4cf8c", 00:24:36.904 "is_configured": true, 00:24:36.904 "data_offset": 2048, 00:24:36.904 "data_size": 63488 00:24:36.904 }, 
00:24:36.905 { 00:24:36.905 "name": null, 00:24:36.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.905 "is_configured": false, 00:24:36.905 "data_offset": 2048, 00:24:36.905 "data_size": 63488 00:24:36.905 }, 00:24:36.905 { 00:24:36.905 "name": "BaseBdev3", 00:24:36.905 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:36.905 "is_configured": true, 00:24:36.905 "data_offset": 2048, 00:24:36.905 "data_size": 63488 00:24:36.905 }, 00:24:36.905 { 00:24:36.905 "name": "BaseBdev4", 00:24:36.905 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:36.905 "is_configured": true, 00:24:36.905 "data_offset": 2048, 00:24:36.905 "data_size": 63488 00:24:36.905 } 00:24:36.905 ] 00:24:36.905 }' 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:36.905 10:39:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.905 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.162 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:37.162 "name": "raid_bdev1", 00:24:37.162 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:37.162 "strip_size_kb": 0, 00:24:37.162 "state": "online", 00:24:37.162 "raid_level": "raid1", 00:24:37.162 "superblock": true, 00:24:37.162 "num_base_bdevs": 4, 00:24:37.162 "num_base_bdevs_discovered": 3, 00:24:37.162 "num_base_bdevs_operational": 3, 00:24:37.162 "base_bdevs_list": [ 00:24:37.162 { 00:24:37.162 "name": "spare", 00:24:37.162 "uuid": "fd3b38ec-d7c4-5a02-80e3-c949a5e4cf8c", 00:24:37.162 "is_configured": true, 00:24:37.162 "data_offset": 2048, 00:24:37.162 "data_size": 63488 00:24:37.162 }, 00:24:37.162 { 00:24:37.163 "name": null, 00:24:37.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.163 "is_configured": false, 00:24:37.163 "data_offset": 2048, 00:24:37.163 "data_size": 63488 00:24:37.163 }, 00:24:37.163 { 00:24:37.163 "name": "BaseBdev3", 00:24:37.163 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:37.163 "is_configured": true, 00:24:37.163 "data_offset": 2048, 00:24:37.163 "data_size": 63488 00:24:37.163 }, 00:24:37.163 { 00:24:37.163 "name": "BaseBdev4", 00:24:37.163 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:37.163 "is_configured": true, 00:24:37.163 "data_offset": 2048, 00:24:37.163 
"data_size": 63488 00:24:37.163 } 00:24:37.163 ] 00:24:37.163 }' 00:24:37.163 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:37.163 10:39:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:37.729 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:37.986 [2024-07-25 10:39:41.575165] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:37.986 [2024-07-25 10:39:41.575207] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:37.986 00:24:37.986 Latency(us) 00:24:37.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.986 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:37.986 raid_bdev1 : 11.10 94.99 284.96 0.00 0.00 14390.72 233.62 113401.55 00:24:37.986 =================================================================================================================== 00:24:37.986 Total : 94.99 284.96 0.00 0.00 14390.72 233.62 113401.55 00:24:37.986 [2024-07-25 10:39:41.627321] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:37.986 [2024-07-25 10:39:41.627369] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:37.986 [2024-07-25 10:39:41.627457] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:37.986 [2024-07-25 10:39:41.627471] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c77a10 name raid_bdev1, state offline 00:24:37.986 0 00:24:37.986 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.986 10:39:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:24:38.244 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:38.244 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:38.244 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:38.244 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:38.244 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:38.244 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:38.244 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:38.244 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:38.244 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:38.244 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:38.244 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:38.244 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:38.244 10:39:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:38.501 /dev/nbd0 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@869 -- # local i 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:38.501 1+0 records in 00:24:38.501 1+0 records out 00:24:38.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219353 s, 18.7 MB/s 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in 
"${base_bdevs[@]:1}" 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:38.501 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:38.759 /dev/nbd1 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:38.759 
10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:38.759 1+0 records in 00:24:38.759 1+0 records out 00:24:38.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212635 s, 19.3 MB/s 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:38.759 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:39.020 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:39.020 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:39.020 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:39.020 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:39.020 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:39.020 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:39.020 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 
00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:39.317 10:39:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:39.317 /dev/nbd1 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:39.575 
10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:39.575 1+0 records in 00:24:39.575 1+0 records out 00:24:39.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170335 s, 24.0 MB/s 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:39.575 
10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:39.575 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:24:39.833 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:40.090 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:40.090 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:40.090 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:40.090 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:40.090 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:40.090 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:40.090 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:40.090 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:40.090 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:24:40.090 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:40.348 10:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:40.606 [2024-07-25 10:39:44.100080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:40.606 [2024-07-25 10:39:44.100152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.606 [2024-07-25 10:39:44.100183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c79f90 00:24:40.606 [2024-07-25 10:39:44.100199] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.606 [2024-07-25 10:39:44.101979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.606 [2024-07-25 10:39:44.102008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:40.606 [2024-07-25 10:39:44.102123] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:40.606 [2024-07-25 10:39:44.102163] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:40.606 [2024-07-25 10:39:44.102300] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:40.606 [2024-07-25 10:39:44.102393] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:40.606 spare 00:24:40.606 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:40.606 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:40.606 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:40.606 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:40.606 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:40.606 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:40.606 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:40.606 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:40.606 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:40.606 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:40.606 10:39:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.606 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.606 [2024-07-25 10:39:44.202730] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c7a7d0 00:24:40.606 [2024-07-25 10:39:44.202752] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:40.606 [2024-07-25 10:39:44.202958] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c7a220 00:24:40.606 [2024-07-25 10:39:44.203147] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c7a7d0 00:24:40.606 [2024-07-25 10:39:44.203164] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c7a7d0 00:24:40.606 [2024-07-25 10:39:44.203283] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.864 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:40.864 "name": "raid_bdev1", 00:24:40.864 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:40.864 "strip_size_kb": 0, 00:24:40.864 "state": "online", 00:24:40.864 "raid_level": "raid1", 00:24:40.864 "superblock": true, 00:24:40.864 "num_base_bdevs": 4, 00:24:40.864 "num_base_bdevs_discovered": 3, 00:24:40.864 "num_base_bdevs_operational": 3, 00:24:40.864 "base_bdevs_list": [ 00:24:40.864 { 00:24:40.864 "name": "spare", 00:24:40.864 "uuid": "fd3b38ec-d7c4-5a02-80e3-c949a5e4cf8c", 00:24:40.864 "is_configured": true, 00:24:40.864 "data_offset": 2048, 00:24:40.864 "data_size": 63488 00:24:40.864 }, 00:24:40.864 { 00:24:40.864 "name": null, 00:24:40.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.864 "is_configured": false, 00:24:40.864 "data_offset": 2048, 00:24:40.864 "data_size": 63488 00:24:40.864 
}, 00:24:40.864 { 00:24:40.864 "name": "BaseBdev3", 00:24:40.864 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:40.864 "is_configured": true, 00:24:40.864 "data_offset": 2048, 00:24:40.864 "data_size": 63488 00:24:40.864 }, 00:24:40.864 { 00:24:40.864 "name": "BaseBdev4", 00:24:40.864 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:40.864 "is_configured": true, 00:24:40.864 "data_offset": 2048, 00:24:40.864 "data_size": 63488 00:24:40.864 } 00:24:40.864 ] 00:24:40.864 }' 00:24:40.864 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:40.864 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.428 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:41.428 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:41.428 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:41.428 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:41.428 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:41.428 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.428 10:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.686 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:41.686 "name": "raid_bdev1", 00:24:41.686 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:41.686 "strip_size_kb": 0, 00:24:41.686 "state": "online", 00:24:41.686 "raid_level": "raid1", 00:24:41.686 "superblock": true, 00:24:41.686 "num_base_bdevs": 4, 00:24:41.686 
"num_base_bdevs_discovered": 3, 00:24:41.686 "num_base_bdevs_operational": 3, 00:24:41.686 "base_bdevs_list": [ 00:24:41.686 { 00:24:41.686 "name": "spare", 00:24:41.686 "uuid": "fd3b38ec-d7c4-5a02-80e3-c949a5e4cf8c", 00:24:41.686 "is_configured": true, 00:24:41.686 "data_offset": 2048, 00:24:41.686 "data_size": 63488 00:24:41.686 }, 00:24:41.686 { 00:24:41.686 "name": null, 00:24:41.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.686 "is_configured": false, 00:24:41.686 "data_offset": 2048, 00:24:41.686 "data_size": 63488 00:24:41.687 }, 00:24:41.687 { 00:24:41.687 "name": "BaseBdev3", 00:24:41.687 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:41.687 "is_configured": true, 00:24:41.687 "data_offset": 2048, 00:24:41.687 "data_size": 63488 00:24:41.687 }, 00:24:41.687 { 00:24:41.687 "name": "BaseBdev4", 00:24:41.687 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:41.687 "is_configured": true, 00:24:41.687 "data_offset": 2048, 00:24:41.687 "data_size": 63488 00:24:41.687 } 00:24:41.687 ] 00:24:41.687 }' 00:24:41.687 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:41.687 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:41.687 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:41.687 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:41.687 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.687 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:41.944 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:24:41.944 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:42.202 [2024-07-25 10:39:45.744738] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:42.202 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:42.202 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:42.202 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:42.202 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:42.202 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:42.202 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:42.202 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:42.202 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:42.202 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:42.202 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:42.202 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.202 10:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.459 10:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:42.459 "name": "raid_bdev1", 00:24:42.459 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:42.459 "strip_size_kb": 0, 00:24:42.459 "state": "online", 00:24:42.459 
"raid_level": "raid1", 00:24:42.459 "superblock": true, 00:24:42.459 "num_base_bdevs": 4, 00:24:42.459 "num_base_bdevs_discovered": 2, 00:24:42.459 "num_base_bdevs_operational": 2, 00:24:42.459 "base_bdevs_list": [ 00:24:42.459 { 00:24:42.459 "name": null, 00:24:42.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.459 "is_configured": false, 00:24:42.459 "data_offset": 2048, 00:24:42.459 "data_size": 63488 00:24:42.459 }, 00:24:42.459 { 00:24:42.459 "name": null, 00:24:42.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.459 "is_configured": false, 00:24:42.459 "data_offset": 2048, 00:24:42.459 "data_size": 63488 00:24:42.459 }, 00:24:42.459 { 00:24:42.459 "name": "BaseBdev3", 00:24:42.459 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:42.459 "is_configured": true, 00:24:42.459 "data_offset": 2048, 00:24:42.459 "data_size": 63488 00:24:42.459 }, 00:24:42.459 { 00:24:42.459 "name": "BaseBdev4", 00:24:42.459 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:42.459 "is_configured": true, 00:24:42.459 "data_offset": 2048, 00:24:42.459 "data_size": 63488 00:24:42.459 } 00:24:42.459 ] 00:24:42.459 }' 00:24:42.459 10:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:42.459 10:39:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:43.025 10:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:43.283 [2024-07-25 10:39:46.771616] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:43.283 [2024-07-25 10:39:46.771843] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:43.283 [2024-07-25 10:39:46.771871] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev 
raid_bdev1. 00:24:43.283 [2024-07-25 10:39:46.771915] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:43.283 [2024-07-25 10:39:46.777604] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c7a220 00:24:43.283 [2024-07-25 10:39:46.779810] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:43.283 10:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:24:44.215 10:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.215 10:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:44.215 10:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:44.215 10:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:44.215 10:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:44.215 10:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.215 10:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.473 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:44.473 "name": "raid_bdev1", 00:24:44.473 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:44.473 "strip_size_kb": 0, 00:24:44.473 "state": "online", 00:24:44.473 "raid_level": "raid1", 00:24:44.473 "superblock": true, 00:24:44.473 "num_base_bdevs": 4, 00:24:44.473 "num_base_bdevs_discovered": 3, 00:24:44.473 "num_base_bdevs_operational": 3, 00:24:44.473 "process": { 00:24:44.473 "type": "rebuild", 00:24:44.473 "target": "spare", 00:24:44.473 "progress": { 00:24:44.473 "blocks": 
24576, 00:24:44.473 "percent": 38 00:24:44.473 } 00:24:44.473 }, 00:24:44.473 "base_bdevs_list": [ 00:24:44.473 { 00:24:44.473 "name": "spare", 00:24:44.473 "uuid": "fd3b38ec-d7c4-5a02-80e3-c949a5e4cf8c", 00:24:44.473 "is_configured": true, 00:24:44.473 "data_offset": 2048, 00:24:44.473 "data_size": 63488 00:24:44.473 }, 00:24:44.473 { 00:24:44.473 "name": null, 00:24:44.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.473 "is_configured": false, 00:24:44.473 "data_offset": 2048, 00:24:44.473 "data_size": 63488 00:24:44.473 }, 00:24:44.473 { 00:24:44.473 "name": "BaseBdev3", 00:24:44.473 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:44.473 "is_configured": true, 00:24:44.473 "data_offset": 2048, 00:24:44.473 "data_size": 63488 00:24:44.473 }, 00:24:44.473 { 00:24:44.473 "name": "BaseBdev4", 00:24:44.473 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:44.473 "is_configured": true, 00:24:44.473 "data_offset": 2048, 00:24:44.473 "data_size": 63488 00:24:44.473 } 00:24:44.473 ] 00:24:44.473 }' 00:24:44.473 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:44.473 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:44.473 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:44.473 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:44.473 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:44.731 [2024-07-25 10:39:48.399449] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:44.989 [2024-07-25 10:39:48.494296] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:44.989 [2024-07-25 
10:39:48.494354] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.989 [2024-07-25 10:39:48.494376] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:44.989 [2024-07-25 10:39:48.494387] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:44.989 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:44.989 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:44.989 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:44.989 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:44.989 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:44.989 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:44.989 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:44.989 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:44.989 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:44.989 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:44.989 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.989 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.247 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:45.247 "name": "raid_bdev1", 00:24:45.247 "uuid": 
"88ff836f-1697-4855-af05-4aac88e808c1", 00:24:45.247 "strip_size_kb": 0, 00:24:45.247 "state": "online", 00:24:45.247 "raid_level": "raid1", 00:24:45.247 "superblock": true, 00:24:45.247 "num_base_bdevs": 4, 00:24:45.247 "num_base_bdevs_discovered": 2, 00:24:45.247 "num_base_bdevs_operational": 2, 00:24:45.247 "base_bdevs_list": [ 00:24:45.247 { 00:24:45.247 "name": null, 00:24:45.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.247 "is_configured": false, 00:24:45.247 "data_offset": 2048, 00:24:45.247 "data_size": 63488 00:24:45.247 }, 00:24:45.247 { 00:24:45.247 "name": null, 00:24:45.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.247 "is_configured": false, 00:24:45.247 "data_offset": 2048, 00:24:45.247 "data_size": 63488 00:24:45.247 }, 00:24:45.247 { 00:24:45.247 "name": "BaseBdev3", 00:24:45.247 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:45.247 "is_configured": true, 00:24:45.247 "data_offset": 2048, 00:24:45.247 "data_size": 63488 00:24:45.247 }, 00:24:45.247 { 00:24:45.247 "name": "BaseBdev4", 00:24:45.247 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:45.247 "is_configured": true, 00:24:45.247 "data_offset": 2048, 00:24:45.247 "data_size": 63488 00:24:45.247 } 00:24:45.247 ] 00:24:45.247 }' 00:24:45.247 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:45.247 10:39:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.811 10:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:46.069 [2024-07-25 10:39:49.535334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:46.069 [2024-07-25 10:39:49.535403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:46.069 [2024-07-25 10:39:49.535432] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c79800 00:24:46.069 [2024-07-25 10:39:49.535448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:46.069 [2024-07-25 10:39:49.535912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:46.069 [2024-07-25 10:39:49.535939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:46.069 [2024-07-25 10:39:49.536046] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:46.069 [2024-07-25 10:39:49.536066] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:46.069 [2024-07-25 10:39:49.536077] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:46.069 [2024-07-25 10:39:49.536119] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:46.069 [2024-07-25 10:39:49.542096] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c943f0 00:24:46.069 spare 00:24:46.069 [2024-07-25 10:39:49.543681] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:46.069 10:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:24:47.002 10:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:47.002 10:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:47.002 10:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:47.002 10:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:47.002 10:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:47.002 10:39:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.002 10:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.260 10:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:47.260 "name": "raid_bdev1", 00:24:47.260 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:47.260 "strip_size_kb": 0, 00:24:47.260 "state": "online", 00:24:47.260 "raid_level": "raid1", 00:24:47.260 "superblock": true, 00:24:47.260 "num_base_bdevs": 4, 00:24:47.260 "num_base_bdevs_discovered": 3, 00:24:47.260 "num_base_bdevs_operational": 3, 00:24:47.260 "process": { 00:24:47.260 "type": "rebuild", 00:24:47.260 "target": "spare", 00:24:47.260 "progress": { 00:24:47.260 "blocks": 24576, 00:24:47.260 "percent": 38 00:24:47.260 } 00:24:47.260 }, 00:24:47.260 "base_bdevs_list": [ 00:24:47.260 { 00:24:47.260 "name": "spare", 00:24:47.260 "uuid": "fd3b38ec-d7c4-5a02-80e3-c949a5e4cf8c", 00:24:47.260 "is_configured": true, 00:24:47.260 "data_offset": 2048, 00:24:47.260 "data_size": 63488 00:24:47.260 }, 00:24:47.260 { 00:24:47.260 "name": null, 00:24:47.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.260 "is_configured": false, 00:24:47.260 "data_offset": 2048, 00:24:47.260 "data_size": 63488 00:24:47.260 }, 00:24:47.260 { 00:24:47.260 "name": "BaseBdev3", 00:24:47.260 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:47.260 "is_configured": true, 00:24:47.260 "data_offset": 2048, 00:24:47.260 "data_size": 63488 00:24:47.260 }, 00:24:47.260 { 00:24:47.260 "name": "BaseBdev4", 00:24:47.260 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:47.260 "is_configured": true, 00:24:47.260 "data_offset": 2048, 00:24:47.260 "data_size": 63488 00:24:47.260 } 00:24:47.260 ] 00:24:47.260 }' 00:24:47.260 10:39:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:47.260 10:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:47.260 10:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:47.260 10:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:47.260 10:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:47.518 [2024-07-25 10:39:51.166975] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:47.776 [2024-07-25 10:39:51.258233] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:47.776 [2024-07-25 10:39:51.258291] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.776 [2024-07-25 10:39:51.258313] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:47.776 [2024-07-25 10:39:51.258324] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:47.776 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:47.776 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:47.776 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:47.776 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:47.776 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:47.776 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:47.776 10:39:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:47.776 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:47.776 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:47.776 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:47.776 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.776 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.033 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:48.033 "name": "raid_bdev1", 00:24:48.033 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:48.033 "strip_size_kb": 0, 00:24:48.033 "state": "online", 00:24:48.033 "raid_level": "raid1", 00:24:48.033 "superblock": true, 00:24:48.033 "num_base_bdevs": 4, 00:24:48.033 "num_base_bdevs_discovered": 2, 00:24:48.033 "num_base_bdevs_operational": 2, 00:24:48.033 "base_bdevs_list": [ 00:24:48.033 { 00:24:48.033 "name": null, 00:24:48.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.033 "is_configured": false, 00:24:48.033 "data_offset": 2048, 00:24:48.033 "data_size": 63488 00:24:48.033 }, 00:24:48.033 { 00:24:48.033 "name": null, 00:24:48.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.033 "is_configured": false, 00:24:48.033 "data_offset": 2048, 00:24:48.033 "data_size": 63488 00:24:48.033 }, 00:24:48.033 { 00:24:48.033 "name": "BaseBdev3", 00:24:48.034 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:48.034 "is_configured": true, 00:24:48.034 "data_offset": 2048, 00:24:48.034 "data_size": 63488 00:24:48.034 }, 00:24:48.034 { 00:24:48.034 "name": "BaseBdev4", 00:24:48.034 "uuid": 
"f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:48.034 "is_configured": true, 00:24:48.034 "data_offset": 2048, 00:24:48.034 "data_size": 63488 00:24:48.034 } 00:24:48.034 ] 00:24:48.034 }' 00:24:48.034 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:48.034 10:39:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:48.629 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:48.629 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:48.629 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:48.629 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:48.629 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:48.629 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.629 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.886 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:48.886 "name": "raid_bdev1", 00:24:48.886 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:48.886 "strip_size_kb": 0, 00:24:48.886 "state": "online", 00:24:48.886 "raid_level": "raid1", 00:24:48.886 "superblock": true, 00:24:48.886 "num_base_bdevs": 4, 00:24:48.886 "num_base_bdevs_discovered": 2, 00:24:48.887 "num_base_bdevs_operational": 2, 00:24:48.887 "base_bdevs_list": [ 00:24:48.887 { 00:24:48.887 "name": null, 00:24:48.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.887 "is_configured": false, 00:24:48.887 "data_offset": 2048, 00:24:48.887 "data_size": 63488 
00:24:48.887 }, 00:24:48.887 { 00:24:48.887 "name": null, 00:24:48.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.887 "is_configured": false, 00:24:48.887 "data_offset": 2048, 00:24:48.887 "data_size": 63488 00:24:48.887 }, 00:24:48.887 { 00:24:48.887 "name": "BaseBdev3", 00:24:48.887 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:48.887 "is_configured": true, 00:24:48.887 "data_offset": 2048, 00:24:48.887 "data_size": 63488 00:24:48.887 }, 00:24:48.887 { 00:24:48.887 "name": "BaseBdev4", 00:24:48.887 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:48.887 "is_configured": true, 00:24:48.887 "data_offset": 2048, 00:24:48.887 "data_size": 63488 00:24:48.887 } 00:24:48.887 ] 00:24:48.887 }' 00:24:48.887 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:48.887 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:48.887 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:48.887 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:48.887 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:49.145 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:49.402 [2024-07-25 10:39:52.928881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:49.402 [2024-07-25 10:39:52.928947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.402 [2024-07-25 10:39:52.928976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c7d660 
00:24:49.402 [2024-07-25 10:39:52.928991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.402 [2024-07-25 10:39:52.929422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.402 [2024-07-25 10:39:52.929447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:49.402 [2024-07-25 10:39:52.929537] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:49.402 [2024-07-25 10:39:52.929556] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:49.402 [2024-07-25 10:39:52.929566] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:49.402 BaseBdev1 00:24:49.402 10:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:24:50.335 10:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:50.335 10:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:50.335 10:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:50.335 10:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:50.335 10:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:50.335 10:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:50.335 10:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:50.335 10:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:50.335 10:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:50.335 10:39:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:50.335 10:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.335 10:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.593 10:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:50.593 "name": "raid_bdev1", 00:24:50.593 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:50.593 "strip_size_kb": 0, 00:24:50.593 "state": "online", 00:24:50.593 "raid_level": "raid1", 00:24:50.593 "superblock": true, 00:24:50.593 "num_base_bdevs": 4, 00:24:50.593 "num_base_bdevs_discovered": 2, 00:24:50.593 "num_base_bdevs_operational": 2, 00:24:50.593 "base_bdevs_list": [ 00:24:50.593 { 00:24:50.593 "name": null, 00:24:50.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.593 "is_configured": false, 00:24:50.593 "data_offset": 2048, 00:24:50.593 "data_size": 63488 00:24:50.593 }, 00:24:50.593 { 00:24:50.593 "name": null, 00:24:50.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.593 "is_configured": false, 00:24:50.593 "data_offset": 2048, 00:24:50.593 "data_size": 63488 00:24:50.593 }, 00:24:50.593 { 00:24:50.593 "name": "BaseBdev3", 00:24:50.593 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:50.593 "is_configured": true, 00:24:50.593 "data_offset": 2048, 00:24:50.593 "data_size": 63488 00:24:50.593 }, 00:24:50.593 { 00:24:50.593 "name": "BaseBdev4", 00:24:50.593 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:50.593 "is_configured": true, 00:24:50.593 "data_offset": 2048, 00:24:50.593 "data_size": 63488 00:24:50.593 } 00:24:50.593 ] 00:24:50.593 }' 00:24:50.593 10:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:50.593 10:39:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:24:51.157 10:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:51.157 10:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:51.157 10:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:51.157 10:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:51.157 10:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:51.157 10:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.157 10:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.415 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:51.415 "name": "raid_bdev1", 00:24:51.415 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:51.415 "strip_size_kb": 0, 00:24:51.415 "state": "online", 00:24:51.415 "raid_level": "raid1", 00:24:51.415 "superblock": true, 00:24:51.415 "num_base_bdevs": 4, 00:24:51.415 "num_base_bdevs_discovered": 2, 00:24:51.415 "num_base_bdevs_operational": 2, 00:24:51.415 "base_bdevs_list": [ 00:24:51.415 { 00:24:51.415 "name": null, 00:24:51.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.415 "is_configured": false, 00:24:51.415 "data_offset": 2048, 00:24:51.415 "data_size": 63488 00:24:51.415 }, 00:24:51.415 { 00:24:51.415 "name": null, 00:24:51.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.415 "is_configured": false, 00:24:51.415 "data_offset": 2048, 00:24:51.415 "data_size": 63488 00:24:51.415 }, 00:24:51.415 { 00:24:51.415 "name": "BaseBdev3", 00:24:51.415 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 
00:24:51.415 "is_configured": true, 00:24:51.415 "data_offset": 2048, 00:24:51.415 "data_size": 63488 00:24:51.415 }, 00:24:51.415 { 00:24:51.415 "name": "BaseBdev4", 00:24:51.415 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:51.415 "is_configured": true, 00:24:51.415 "data_offset": 2048, 00:24:51.415 "data_size": 63488 00:24:51.415 } 00:24:51.415 ] 00:24:51.415 }' 00:24:51.415 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:51.415 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:51.415 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:51.672 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:51.672 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:51.672 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:24:51.672 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:51.672 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:51.672 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:51.672 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:51.672 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:51.672 
10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:51.672 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:51.672 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:51.672 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:24:51.672 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:51.929 [2024-07-25 10:39:55.391736] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:51.929 [2024-07-25 10:39:55.391946] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:51.929 [2024-07-25 10:39:55.391976] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:51.929 request: 00:24:51.929 { 00:24:51.929 "base_bdev": "BaseBdev1", 00:24:51.930 "raid_bdev": "raid_bdev1", 00:24:51.930 "method": "bdev_raid_add_base_bdev", 00:24:51.930 "req_id": 1 00:24:51.930 } 00:24:51.930 Got JSON-RPC error response 00:24:51.930 response: 00:24:51.930 { 00:24:51.930 "code": -22, 00:24:51.930 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:51.930 } 00:24:51.930 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:24:51.930 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:51.930 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' 
]] 00:24:51.930 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:51.930 10:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:24:52.863 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:52.863 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:52.863 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:52.863 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:52.863 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:52.863 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:52.863 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:52.863 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:52.863 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:52.863 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:52.863 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.863 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.121 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:53.121 "name": "raid_bdev1", 00:24:53.121 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:53.121 "strip_size_kb": 0, 00:24:53.121 "state": "online", 00:24:53.121 "raid_level": "raid1", 00:24:53.121 "superblock": 
true, 00:24:53.121 "num_base_bdevs": 4, 00:24:53.121 "num_base_bdevs_discovered": 2, 00:24:53.121 "num_base_bdevs_operational": 2, 00:24:53.121 "base_bdevs_list": [ 00:24:53.121 { 00:24:53.121 "name": null, 00:24:53.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.122 "is_configured": false, 00:24:53.122 "data_offset": 2048, 00:24:53.122 "data_size": 63488 00:24:53.122 }, 00:24:53.122 { 00:24:53.122 "name": null, 00:24:53.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.122 "is_configured": false, 00:24:53.122 "data_offset": 2048, 00:24:53.122 "data_size": 63488 00:24:53.122 }, 00:24:53.122 { 00:24:53.122 "name": "BaseBdev3", 00:24:53.122 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:53.122 "is_configured": true, 00:24:53.122 "data_offset": 2048, 00:24:53.122 "data_size": 63488 00:24:53.122 }, 00:24:53.122 { 00:24:53.122 "name": "BaseBdev4", 00:24:53.122 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:53.122 "is_configured": true, 00:24:53.122 "data_offset": 2048, 00:24:53.122 "data_size": 63488 00:24:53.122 } 00:24:53.122 ] 00:24:53.122 }' 00:24:53.122 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:53.122 10:39:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:53.688 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:53.688 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:53.688 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:53.688 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:53.688 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:53.688 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.688 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.946 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:53.946 "name": "raid_bdev1", 00:24:53.946 "uuid": "88ff836f-1697-4855-af05-4aac88e808c1", 00:24:53.946 "strip_size_kb": 0, 00:24:53.946 "state": "online", 00:24:53.946 "raid_level": "raid1", 00:24:53.946 "superblock": true, 00:24:53.946 "num_base_bdevs": 4, 00:24:53.946 "num_base_bdevs_discovered": 2, 00:24:53.946 "num_base_bdevs_operational": 2, 00:24:53.946 "base_bdevs_list": [ 00:24:53.946 { 00:24:53.946 "name": null, 00:24:53.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.946 "is_configured": false, 00:24:53.946 "data_offset": 2048, 00:24:53.946 "data_size": 63488 00:24:53.946 }, 00:24:53.946 { 00:24:53.946 "name": null, 00:24:53.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.946 "is_configured": false, 00:24:53.946 "data_offset": 2048, 00:24:53.946 "data_size": 63488 00:24:53.946 }, 00:24:53.946 { 00:24:53.946 "name": "BaseBdev3", 00:24:53.946 "uuid": "5f2215d1-2a54-57d8-a728-e0ce150d7a24", 00:24:53.946 "is_configured": true, 00:24:53.946 "data_offset": 2048, 00:24:53.946 "data_size": 63488 00:24:53.946 }, 00:24:53.946 { 00:24:53.946 "name": "BaseBdev4", 00:24:53.946 "uuid": "f39a499a-962b-5343-8a3d-cf6246f654b5", 00:24:53.946 "is_configured": true, 00:24:53.946 "data_offset": 2048, 00:24:53.946 "data_size": 63488 00:24:53.946 } 00:24:53.946 ] 00:24:53.946 }' 00:24:53.946 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:53.946 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:53.946 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:24:53.946 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:53.946 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2456044 00:24:53.946 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 2456044 ']' 00:24:53.946 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 2456044 00:24:53.946 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:24:53.946 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:53.946 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2456044 00:24:54.205 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:54.205 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:54.205 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2456044' 00:24:54.205 killing process with pid 2456044 00:24:54.205 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 2456044 00:24:54.205 Received shutdown signal, test time was about 27.094165 seconds 00:24:54.205 00:24:54.205 Latency(us) 00:24:54.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.205 =================================================================================================================== 00:24:54.205 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.205 [2024-07-25 10:39:57.660416] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:54.205 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 2456044 00:24:54.205 [2024-07-25 10:39:57.660568] bdev_raid.c: 
486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:54.205 [2024-07-25 10:39:57.660659] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:54.205 [2024-07-25 10:39:57.660675] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c7a7d0 name raid_bdev1, state offline 00:24:54.205 [2024-07-25 10:39:57.716878] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:54.463 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:24:54.463 00:24:54.463 real 0m33.008s 00:24:54.463 user 0m52.656s 00:24:54.463 sys 0m4.068s 00:24:54.463 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:54.463 10:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.463 ************************************ 00:24:54.463 END TEST raid_rebuild_test_sb_io 00:24:54.463 ************************************ 00:24:54.463 10:39:58 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:24:54.463 10:39:58 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:24:54.463 10:39:58 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:24:54.463 10:39:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:54.463 10:39:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:54.463 10:39:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:54.464 ************************************ 00:24:54.464 START TEST raid_state_function_test_sb_4k 00:24:54.464 ************************************ 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:24:54.464 10:39:58 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=2460361 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2460361' 00:24:54.464 Process raid pid: 2460361 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 2460361 /var/tmp/spdk-raid.sock 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 2460361 ']' 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:54.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:54.464 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.464 [2024-07-25 10:39:58.094533] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:24:54.464 [2024-07-25 10:39:58.094598] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.722 [2024-07-25 10:39:58.178511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.722 [2024-07-25 10:39:58.301605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.722 [2024-07-25 10:39:58.374549] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:54.722 [2024-07-25 10:39:58.374587] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:54.722 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:54.722 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:24:54.722 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:54.980 [2024-07-25 10:39:58.651311] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:54.980 [2024-07-25 10:39:58.651365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:54.980 [2024-07-25 10:39:58.651377] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:54.980 [2024-07-25 10:39:58.651390] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:54.980 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:54.980 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:54.980 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:54.980 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:54.980 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:54.980 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:54.980 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:54.980 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:54.980 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:54.980 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:54.980 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.980 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:55.238 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:55.238 "name": "Existed_Raid", 00:24:55.238 "uuid": "c88a8902-11e4-41ec-a0de-07412f9200f2", 00:24:55.238 "strip_size_kb": 0, 00:24:55.238 "state": "configuring", 00:24:55.238 "raid_level": "raid1", 00:24:55.238 "superblock": true, 00:24:55.238 
"num_base_bdevs": 2, 00:24:55.238 "num_base_bdevs_discovered": 0, 00:24:55.238 "num_base_bdevs_operational": 2, 00:24:55.238 "base_bdevs_list": [ 00:24:55.238 { 00:24:55.238 "name": "BaseBdev1", 00:24:55.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.238 "is_configured": false, 00:24:55.238 "data_offset": 0, 00:24:55.238 "data_size": 0 00:24:55.238 }, 00:24:55.238 { 00:24:55.238 "name": "BaseBdev2", 00:24:55.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.238 "is_configured": false, 00:24:55.238 "data_offset": 0, 00:24:55.238 "data_size": 0 00:24:55.238 } 00:24:55.238 ] 00:24:55.238 }' 00:24:55.238 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:55.238 10:39:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:55.803 10:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:56.061 [2024-07-25 10:39:59.693985] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:56.061 [2024-07-25 10:39:59.694026] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ad9600 name Existed_Raid, state configuring 00:24:56.061 10:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:56.319 [2024-07-25 10:39:59.938656] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:56.319 [2024-07-25 10:39:59.938703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:56.319 [2024-07-25 10:39:59.938716] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:56.319 [2024-07-25 
10:39:59.938729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:56.319 10:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:24:56.576 [2024-07-25 10:40:00.235046] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:56.576 BaseBdev1 00:24:56.576 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:56.576 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:56.576 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:56.576 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:24:56.576 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:56.576 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:56.576 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:56.833 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:57.090 [ 00:24:57.090 { 00:24:57.090 "name": "BaseBdev1", 00:24:57.090 "aliases": [ 00:24:57.090 "5f60aeec-86e9-49a7-b516-ad1a8ccd0a0a" 00:24:57.090 ], 00:24:57.090 "product_name": "Malloc disk", 00:24:57.090 "block_size": 4096, 00:24:57.090 "num_blocks": 8192, 00:24:57.090 "uuid": "5f60aeec-86e9-49a7-b516-ad1a8ccd0a0a", 00:24:57.090 "assigned_rate_limits": { 
00:24:57.090 "rw_ios_per_sec": 0, 00:24:57.090 "rw_mbytes_per_sec": 0, 00:24:57.090 "r_mbytes_per_sec": 0, 00:24:57.090 "w_mbytes_per_sec": 0 00:24:57.090 }, 00:24:57.090 "claimed": true, 00:24:57.090 "claim_type": "exclusive_write", 00:24:57.090 "zoned": false, 00:24:57.090 "supported_io_types": { 00:24:57.090 "read": true, 00:24:57.090 "write": true, 00:24:57.090 "unmap": true, 00:24:57.090 "flush": true, 00:24:57.090 "reset": true, 00:24:57.090 "nvme_admin": false, 00:24:57.090 "nvme_io": false, 00:24:57.090 "nvme_io_md": false, 00:24:57.090 "write_zeroes": true, 00:24:57.090 "zcopy": true, 00:24:57.090 "get_zone_info": false, 00:24:57.090 "zone_management": false, 00:24:57.090 "zone_append": false, 00:24:57.090 "compare": false, 00:24:57.090 "compare_and_write": false, 00:24:57.090 "abort": true, 00:24:57.090 "seek_hole": false, 00:24:57.090 "seek_data": false, 00:24:57.090 "copy": true, 00:24:57.090 "nvme_iov_md": false 00:24:57.090 }, 00:24:57.090 "memory_domains": [ 00:24:57.090 { 00:24:57.090 "dma_device_id": "system", 00:24:57.090 "dma_device_type": 1 00:24:57.090 }, 00:24:57.090 { 00:24:57.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.090 "dma_device_type": 2 00:24:57.090 } 00:24:57.090 ], 00:24:57.090 "driver_specific": {} 00:24:57.090 } 00:24:57.090 ] 00:24:57.090 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:24:57.090 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:57.090 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:57.090 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:57.090 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:57.090 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:57.090 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:57.090 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:57.090 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:57.090 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:57.090 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:57.090 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.090 10:40:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.355 10:40:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:57.355 "name": "Existed_Raid", 00:24:57.355 "uuid": "06cd9f73-a1e0-4a0e-a682-0d5036f6f82f", 00:24:57.355 "strip_size_kb": 0, 00:24:57.355 "state": "configuring", 00:24:57.355 "raid_level": "raid1", 00:24:57.355 "superblock": true, 00:24:57.355 "num_base_bdevs": 2, 00:24:57.355 "num_base_bdevs_discovered": 1, 00:24:57.355 "num_base_bdevs_operational": 2, 00:24:57.355 "base_bdevs_list": [ 00:24:57.355 { 00:24:57.355 "name": "BaseBdev1", 00:24:57.355 "uuid": "5f60aeec-86e9-49a7-b516-ad1a8ccd0a0a", 00:24:57.355 "is_configured": true, 00:24:57.355 "data_offset": 256, 00:24:57.355 "data_size": 7936 00:24:57.355 }, 00:24:57.355 { 00:24:57.355 "name": "BaseBdev2", 00:24:57.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.355 "is_configured": false, 00:24:57.355 "data_offset": 0, 00:24:57.355 "data_size": 0 00:24:57.355 } 00:24:57.355 ] 00:24:57.355 }' 00:24:57.355 10:40:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:57.355 10:40:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:57.927 10:40:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:58.184 [2024-07-25 10:40:01.775134] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:58.184 [2024-07-25 10:40:01.775182] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ad8e50 name Existed_Raid, state configuring 00:24:58.184 10:40:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:58.442 [2024-07-25 10:40:02.011772] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:58.442 [2024-07-25 10:40:02.013038] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:58.442 [2024-07-25 10:40:02.013066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:58.442 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:58.442 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:58.442 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:58.442 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:58.442 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:58.442 10:40:02 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:58.442 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:58.442 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:58.442 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:58.442 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:58.442 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:58.442 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:58.442 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.442 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:58.700 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:58.700 "name": "Existed_Raid", 00:24:58.700 "uuid": "82a4960e-21bb-47f9-b628-e5f5831b1b26", 00:24:58.700 "strip_size_kb": 0, 00:24:58.700 "state": "configuring", 00:24:58.700 "raid_level": "raid1", 00:24:58.700 "superblock": true, 00:24:58.700 "num_base_bdevs": 2, 00:24:58.700 "num_base_bdevs_discovered": 1, 00:24:58.700 "num_base_bdevs_operational": 2, 00:24:58.700 "base_bdevs_list": [ 00:24:58.700 { 00:24:58.700 "name": "BaseBdev1", 00:24:58.700 "uuid": "5f60aeec-86e9-49a7-b516-ad1a8ccd0a0a", 00:24:58.700 "is_configured": true, 00:24:58.700 "data_offset": 256, 00:24:58.700 "data_size": 7936 00:24:58.700 }, 00:24:58.700 { 00:24:58.700 "name": "BaseBdev2", 00:24:58.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.700 
"is_configured": false, 00:24:58.700 "data_offset": 0, 00:24:58.700 "data_size": 0 00:24:58.700 } 00:24:58.700 ] 00:24:58.700 }' 00:24:58.700 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:58.700 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:59.264 10:40:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:24:59.522 [2024-07-25 10:40:03.030985] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:59.522 [2024-07-25 10:40:03.031206] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ad9b40 00:24:59.522 [2024-07-25 10:40:03.031222] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:59.522 [2024-07-25 10:40:03.031362] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ada8e0 00:24:59.522 [2024-07-25 10:40:03.031488] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ad9b40 00:24:59.522 [2024-07-25 10:40:03.031501] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1ad9b40 00:24:59.522 [2024-07-25 10:40:03.031590] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:59.522 BaseBdev2 00:24:59.522 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:59.522 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:59.523 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:59.523 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:24:59.523 10:40:03 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:59.523 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:59.523 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:59.780 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:00.038 [ 00:25:00.038 { 00:25:00.038 "name": "BaseBdev2", 00:25:00.038 "aliases": [ 00:25:00.038 "611c73ab-dd51-40e8-9595-009f286f010f" 00:25:00.038 ], 00:25:00.038 "product_name": "Malloc disk", 00:25:00.038 "block_size": 4096, 00:25:00.038 "num_blocks": 8192, 00:25:00.038 "uuid": "611c73ab-dd51-40e8-9595-009f286f010f", 00:25:00.038 "assigned_rate_limits": { 00:25:00.038 "rw_ios_per_sec": 0, 00:25:00.038 "rw_mbytes_per_sec": 0, 00:25:00.038 "r_mbytes_per_sec": 0, 00:25:00.038 "w_mbytes_per_sec": 0 00:25:00.038 }, 00:25:00.038 "claimed": true, 00:25:00.038 "claim_type": "exclusive_write", 00:25:00.038 "zoned": false, 00:25:00.038 "supported_io_types": { 00:25:00.038 "read": true, 00:25:00.038 "write": true, 00:25:00.038 "unmap": true, 00:25:00.038 "flush": true, 00:25:00.038 "reset": true, 00:25:00.038 "nvme_admin": false, 00:25:00.038 "nvme_io": false, 00:25:00.038 "nvme_io_md": false, 00:25:00.038 "write_zeroes": true, 00:25:00.038 "zcopy": true, 00:25:00.038 "get_zone_info": false, 00:25:00.038 "zone_management": false, 00:25:00.038 "zone_append": false, 00:25:00.038 "compare": false, 00:25:00.038 "compare_and_write": false, 00:25:00.038 "abort": true, 00:25:00.038 "seek_hole": false, 00:25:00.038 "seek_data": false, 00:25:00.038 "copy": true, 00:25:00.038 "nvme_iov_md": false 00:25:00.038 }, 00:25:00.038 
"memory_domains": [ 00:25:00.038 { 00:25:00.038 "dma_device_id": "system", 00:25:00.038 "dma_device_type": 1 00:25:00.038 }, 00:25:00.038 { 00:25:00.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.038 "dma_device_type": 2 00:25:00.038 } 00:25:00.038 ], 00:25:00.038 "driver_specific": {} 00:25:00.038 } 00:25:00.038 ] 00:25:00.038 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:25:00.038 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:00.038 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:00.038 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:00.039 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:00.039 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:00.039 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:00.039 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:00.039 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:00.039 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:00.039 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:00.039 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:00.039 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:00.039 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.039 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:00.297 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:00.297 "name": "Existed_Raid", 00:25:00.297 "uuid": "82a4960e-21bb-47f9-b628-e5f5831b1b26", 00:25:00.297 "strip_size_kb": 0, 00:25:00.297 "state": "online", 00:25:00.297 "raid_level": "raid1", 00:25:00.297 "superblock": true, 00:25:00.297 "num_base_bdevs": 2, 00:25:00.297 "num_base_bdevs_discovered": 2, 00:25:00.297 "num_base_bdevs_operational": 2, 00:25:00.297 "base_bdevs_list": [ 00:25:00.297 { 00:25:00.297 "name": "BaseBdev1", 00:25:00.297 "uuid": "5f60aeec-86e9-49a7-b516-ad1a8ccd0a0a", 00:25:00.297 "is_configured": true, 00:25:00.297 "data_offset": 256, 00:25:00.297 "data_size": 7936 00:25:00.297 }, 00:25:00.297 { 00:25:00.297 "name": "BaseBdev2", 00:25:00.297 "uuid": "611c73ab-dd51-40e8-9595-009f286f010f", 00:25:00.297 "is_configured": true, 00:25:00.297 "data_offset": 256, 00:25:00.297 "data_size": 7936 00:25:00.297 } 00:25:00.297 ] 00:25:00.297 }' 00:25:00.297 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:00.297 10:40:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.861 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:00.861 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:00.861 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:00.861 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:00.861 10:40:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:00.861 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:25:00.861 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:00.861 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:00.861 [2024-07-25 10:40:04.551251] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:01.118 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:01.118 "name": "Existed_Raid", 00:25:01.118 "aliases": [ 00:25:01.118 "82a4960e-21bb-47f9-b628-e5f5831b1b26" 00:25:01.118 ], 00:25:01.118 "product_name": "Raid Volume", 00:25:01.118 "block_size": 4096, 00:25:01.118 "num_blocks": 7936, 00:25:01.118 "uuid": "82a4960e-21bb-47f9-b628-e5f5831b1b26", 00:25:01.118 "assigned_rate_limits": { 00:25:01.118 "rw_ios_per_sec": 0, 00:25:01.118 "rw_mbytes_per_sec": 0, 00:25:01.118 "r_mbytes_per_sec": 0, 00:25:01.118 "w_mbytes_per_sec": 0 00:25:01.118 }, 00:25:01.118 "claimed": false, 00:25:01.118 "zoned": false, 00:25:01.118 "supported_io_types": { 00:25:01.118 "read": true, 00:25:01.118 "write": true, 00:25:01.118 "unmap": false, 00:25:01.118 "flush": false, 00:25:01.118 "reset": true, 00:25:01.118 "nvme_admin": false, 00:25:01.118 "nvme_io": false, 00:25:01.118 "nvme_io_md": false, 00:25:01.118 "write_zeroes": true, 00:25:01.118 "zcopy": false, 00:25:01.118 "get_zone_info": false, 00:25:01.118 "zone_management": false, 00:25:01.118 "zone_append": false, 00:25:01.118 "compare": false, 00:25:01.118 "compare_and_write": false, 00:25:01.118 "abort": false, 00:25:01.118 "seek_hole": false, 00:25:01.118 "seek_data": false, 00:25:01.118 "copy": false, 00:25:01.118 "nvme_iov_md": false 00:25:01.118 
}, 00:25:01.118 "memory_domains": [ 00:25:01.118 { 00:25:01.118 "dma_device_id": "system", 00:25:01.118 "dma_device_type": 1 00:25:01.118 }, 00:25:01.118 { 00:25:01.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.118 "dma_device_type": 2 00:25:01.118 }, 00:25:01.118 { 00:25:01.118 "dma_device_id": "system", 00:25:01.118 "dma_device_type": 1 00:25:01.119 }, 00:25:01.119 { 00:25:01.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.119 "dma_device_type": 2 00:25:01.119 } 00:25:01.119 ], 00:25:01.119 "driver_specific": { 00:25:01.119 "raid": { 00:25:01.119 "uuid": "82a4960e-21bb-47f9-b628-e5f5831b1b26", 00:25:01.119 "strip_size_kb": 0, 00:25:01.119 "state": "online", 00:25:01.119 "raid_level": "raid1", 00:25:01.119 "superblock": true, 00:25:01.119 "num_base_bdevs": 2, 00:25:01.119 "num_base_bdevs_discovered": 2, 00:25:01.119 "num_base_bdevs_operational": 2, 00:25:01.119 "base_bdevs_list": [ 00:25:01.119 { 00:25:01.119 "name": "BaseBdev1", 00:25:01.119 "uuid": "5f60aeec-86e9-49a7-b516-ad1a8ccd0a0a", 00:25:01.119 "is_configured": true, 00:25:01.119 "data_offset": 256, 00:25:01.119 "data_size": 7936 00:25:01.119 }, 00:25:01.119 { 00:25:01.119 "name": "BaseBdev2", 00:25:01.119 "uuid": "611c73ab-dd51-40e8-9595-009f286f010f", 00:25:01.119 "is_configured": true, 00:25:01.119 "data_offset": 256, 00:25:01.119 "data_size": 7936 00:25:01.119 } 00:25:01.119 ] 00:25:01.119 } 00:25:01.119 } 00:25:01.119 }' 00:25:01.119 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:01.119 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:01.119 BaseBdev2' 00:25:01.119 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:01.119 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:01.119 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:01.376 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:01.376 "name": "BaseBdev1", 00:25:01.376 "aliases": [ 00:25:01.376 "5f60aeec-86e9-49a7-b516-ad1a8ccd0a0a" 00:25:01.376 ], 00:25:01.376 "product_name": "Malloc disk", 00:25:01.376 "block_size": 4096, 00:25:01.376 "num_blocks": 8192, 00:25:01.376 "uuid": "5f60aeec-86e9-49a7-b516-ad1a8ccd0a0a", 00:25:01.376 "assigned_rate_limits": { 00:25:01.376 "rw_ios_per_sec": 0, 00:25:01.376 "rw_mbytes_per_sec": 0, 00:25:01.376 "r_mbytes_per_sec": 0, 00:25:01.376 "w_mbytes_per_sec": 0 00:25:01.376 }, 00:25:01.376 "claimed": true, 00:25:01.376 "claim_type": "exclusive_write", 00:25:01.376 "zoned": false, 00:25:01.376 "supported_io_types": { 00:25:01.376 "read": true, 00:25:01.376 "write": true, 00:25:01.376 "unmap": true, 00:25:01.376 "flush": true, 00:25:01.376 "reset": true, 00:25:01.376 "nvme_admin": false, 00:25:01.376 "nvme_io": false, 00:25:01.376 "nvme_io_md": false, 00:25:01.376 "write_zeroes": true, 00:25:01.376 "zcopy": true, 00:25:01.376 "get_zone_info": false, 00:25:01.376 "zone_management": false, 00:25:01.376 "zone_append": false, 00:25:01.376 "compare": false, 00:25:01.376 "compare_and_write": false, 00:25:01.376 "abort": true, 00:25:01.376 "seek_hole": false, 00:25:01.376 "seek_data": false, 00:25:01.376 "copy": true, 00:25:01.376 "nvme_iov_md": false 00:25:01.376 }, 00:25:01.376 "memory_domains": [ 00:25:01.376 { 00:25:01.376 "dma_device_id": "system", 00:25:01.376 "dma_device_type": 1 00:25:01.376 }, 00:25:01.376 { 00:25:01.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.376 "dma_device_type": 2 00:25:01.376 } 00:25:01.376 ], 00:25:01.376 "driver_specific": {} 00:25:01.376 }' 00:25:01.376 10:40:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:01.376 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:01.376 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:01.376 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:01.376 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:01.376 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:01.377 10:40:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:01.377 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:01.377 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:01.377 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:01.634 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:01.634 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:01.634 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:01.634 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:01.634 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:01.891 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:01.891 "name": "BaseBdev2", 00:25:01.891 "aliases": [ 00:25:01.891 "611c73ab-dd51-40e8-9595-009f286f010f" 00:25:01.891 ], 00:25:01.891 "product_name": "Malloc 
disk", 00:25:01.891 "block_size": 4096, 00:25:01.891 "num_blocks": 8192, 00:25:01.891 "uuid": "611c73ab-dd51-40e8-9595-009f286f010f", 00:25:01.891 "assigned_rate_limits": { 00:25:01.891 "rw_ios_per_sec": 0, 00:25:01.891 "rw_mbytes_per_sec": 0, 00:25:01.891 "r_mbytes_per_sec": 0, 00:25:01.891 "w_mbytes_per_sec": 0 00:25:01.891 }, 00:25:01.891 "claimed": true, 00:25:01.891 "claim_type": "exclusive_write", 00:25:01.891 "zoned": false, 00:25:01.891 "supported_io_types": { 00:25:01.891 "read": true, 00:25:01.891 "write": true, 00:25:01.891 "unmap": true, 00:25:01.891 "flush": true, 00:25:01.891 "reset": true, 00:25:01.891 "nvme_admin": false, 00:25:01.891 "nvme_io": false, 00:25:01.891 "nvme_io_md": false, 00:25:01.891 "write_zeroes": true, 00:25:01.891 "zcopy": true, 00:25:01.891 "get_zone_info": false, 00:25:01.891 "zone_management": false, 00:25:01.891 "zone_append": false, 00:25:01.891 "compare": false, 00:25:01.891 "compare_and_write": false, 00:25:01.891 "abort": true, 00:25:01.891 "seek_hole": false, 00:25:01.891 "seek_data": false, 00:25:01.891 "copy": true, 00:25:01.891 "nvme_iov_md": false 00:25:01.891 }, 00:25:01.891 "memory_domains": [ 00:25:01.891 { 00:25:01.891 "dma_device_id": "system", 00:25:01.891 "dma_device_type": 1 00:25:01.891 }, 00:25:01.891 { 00:25:01.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.891 "dma_device_type": 2 00:25:01.891 } 00:25:01.891 ], 00:25:01.891 "driver_specific": {} 00:25:01.891 }' 00:25:01.891 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:01.891 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:01.891 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:01.891 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:01.891 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:25:01.891 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:01.891 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:01.891 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:01.891 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:01.891 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:02.149 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:02.149 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:02.149 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:02.407 [2024-07-25 10:40:05.858633] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.407 10:40:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:02.665 10:40:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:02.665 "name": "Existed_Raid", 00:25:02.665 "uuid": "82a4960e-21bb-47f9-b628-e5f5831b1b26", 00:25:02.665 "strip_size_kb": 0, 00:25:02.665 "state": "online", 00:25:02.665 "raid_level": "raid1", 00:25:02.665 "superblock": true, 00:25:02.665 "num_base_bdevs": 2, 00:25:02.665 "num_base_bdevs_discovered": 1, 00:25:02.665 "num_base_bdevs_operational": 1, 00:25:02.665 "base_bdevs_list": [ 00:25:02.665 { 00:25:02.665 "name": null, 00:25:02.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.665 "is_configured": false, 00:25:02.665 "data_offset": 256, 00:25:02.665 "data_size": 7936 00:25:02.665 }, 00:25:02.665 { 00:25:02.665 "name": "BaseBdev2", 00:25:02.665 "uuid": 
"611c73ab-dd51-40e8-9595-009f286f010f", 00:25:02.665 "is_configured": true, 00:25:02.665 "data_offset": 256, 00:25:02.665 "data_size": 7936 00:25:02.665 } 00:25:02.665 ] 00:25:02.665 }' 00:25:02.665 10:40:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:02.665 10:40:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:03.230 10:40:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:03.230 10:40:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:03.230 10:40:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.230 10:40:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:03.230 10:40:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:03.230 10:40:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:03.230 10:40:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:03.487 [2024-07-25 10:40:07.180642] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:03.487 [2024-07-25 10:40:07.180746] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:03.487 [2024-07-25 10:40:07.192597] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:03.487 [2024-07-25 10:40:07.192654] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:03.487 [2024-07-25 10:40:07.192668] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x1ad9b40 name Existed_Raid, state offline 00:25:03.744 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:03.744 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:03.744 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.744 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 2460361 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 2460361 ']' 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 2460361 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2460361 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2460361' 00:25:04.002 killing process with pid 2460361 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 2460361 00:25:04.002 [2024-07-25 10:40:07.524735] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:04.002 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 2460361 00:25:04.002 [2024-07-25 10:40:07.525810] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:04.259 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:25:04.259 00:25:04.259 real 0m9.750s 00:25:04.259 user 0m17.925s 00:25:04.259 sys 0m1.450s 00:25:04.259 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:04.259 10:40:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:04.259 ************************************ 00:25:04.259 END TEST raid_state_function_test_sb_4k 00:25:04.259 ************************************ 00:25:04.259 10:40:07 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:25:04.259 10:40:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:04.259 10:40:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:04.259 10:40:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:04.259 ************************************ 00:25:04.259 START TEST raid_superblock_test_4k 00:25:04.259 ************************************ 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 
00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=2461764 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 2461764 /var/tmp/spdk-raid.sock 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 2461764 ']' 00:25:04.259 10:40:07 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:04.259 10:40:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:04.260 10:40:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:04.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:04.260 10:40:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:04.260 10:40:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:04.260 [2024-07-25 10:40:07.901511] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:25:04.260 [2024-07-25 10:40:07.901589] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2461764 ] 00:25:04.517 [2024-07-25 10:40:07.984335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.517 [2024-07-25 10:40:08.106329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.517 [2024-07-25 10:40:08.176451] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:04.517 [2024-07-25 10:40:08.176500] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:05.448 10:40:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.448 10:40:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:25:05.448 10:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:25:05.448 10:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= 
num_base_bdevs )) 00:25:05.448 10:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:25:05.448 10:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:25:05.448 10:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:05.448 10:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:05.448 10:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:05.448 10:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:05.448 10:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:25:05.448 malloc1 00:25:05.448 10:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:05.706 [2024-07-25 10:40:09.352479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:05.706 [2024-07-25 10:40:09.352554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.706 [2024-07-25 10:40:09.352580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x226f2b0 00:25:05.706 [2024-07-25 10:40:09.352593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.706 [2024-07-25 10:40:09.354206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.706 [2024-07-25 10:40:09.354230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:05.706 pt1 00:25:05.706 10:40:09 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:05.706 10:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:05.706 10:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:25:05.706 10:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:25:05.706 10:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:05.706 10:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:05.706 10:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:05.706 10:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:05.706 10:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:25:05.964 malloc2 00:25:05.964 10:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:06.222 [2024-07-25 10:40:09.844953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:06.222 [2024-07-25 10:40:09.845014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.222 [2024-07-25 10:40:09.845031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24221e0 00:25:06.222 [2024-07-25 10:40:09.845043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.222 [2024-07-25 10:40:09.846169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.222 
[2024-07-25 10:40:09.846190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:06.222 pt2 00:25:06.222 10:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:06.222 10:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:06.222 10:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:25:06.479 [2024-07-25 10:40:10.089658] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:06.479 [2024-07-25 10:40:10.090852] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:06.480 [2024-07-25 10:40:10.091051] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2406df0 00:25:06.480 [2024-07-25 10:40:10.091066] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:06.480 [2024-07-25 10:40:10.091291] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24079b0 00:25:06.480 [2024-07-25 10:40:10.091456] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2406df0 00:25:06.480 [2024-07-25 10:40:10.091469] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2406df0 00:25:06.480 [2024-07-25 10:40:10.091583] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:06.480 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:06.480 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:06.480 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:06.480 10:40:10 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:06.480 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:06.480 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:06.480 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.480 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.480 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.480 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.480 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.480 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.738 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.738 "name": "raid_bdev1", 00:25:06.738 "uuid": "2b695b6b-a3ae-4f50-b036-05bd232b2ad1", 00:25:06.738 "strip_size_kb": 0, 00:25:06.738 "state": "online", 00:25:06.738 "raid_level": "raid1", 00:25:06.738 "superblock": true, 00:25:06.738 "num_base_bdevs": 2, 00:25:06.738 "num_base_bdevs_discovered": 2, 00:25:06.738 "num_base_bdevs_operational": 2, 00:25:06.738 "base_bdevs_list": [ 00:25:06.738 { 00:25:06.738 "name": "pt1", 00:25:06.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:06.738 "is_configured": true, 00:25:06.738 "data_offset": 256, 00:25:06.738 "data_size": 7936 00:25:06.738 }, 00:25:06.738 { 00:25:06.738 "name": "pt2", 00:25:06.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:06.738 "is_configured": true, 00:25:06.738 "data_offset": 256, 00:25:06.738 "data_size": 7936 00:25:06.738 } 00:25:06.738 ] 00:25:06.738 }' 
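The `verify_raid_bdev_state` helper traced above pulls the raid bdev's entry out of `bdev_raid_get_bdevs` output with the `jq -r '.[] | select(.name == "raid_bdev1")'` filter and then checks individual fields. A minimal offline sketch of that extraction step, assuming `jq` is installed; the real test reads the JSON from `rpc.py -s /var/tmp/spdk-raid.sock`, while here a sample document is inlined in its place:

```shell
# Offline sketch of the verify_raid_bdev_state extraction: the RPC output of
# `bdev_raid_get_bdevs all` is replaced by an inlined sample JSON document.
raid_bdev_info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<'EOF'
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "strip_size_kb": 0,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
  }
]
EOF
)
# The helper then compares single fields against the expected values passed in
# (expected_state, raid_level, strip_size, num_base_bdevs_operational).
state=$(echo "$raid_bdev_info" | jq -r '.state')
raid_level=$(echo "$raid_bdev_info" | jq -r '.raid_level')
echo "$state $raid_level"
```

The same pattern appears later in the log with `expected_state=configuring` after the raid bdev is torn down and only `pt1` has been re-registered.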
00:25:06.738 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:06.738 10:40:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:07.303 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:25:07.303 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:07.303 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:07.303 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:07.303 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:07.303 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:25:07.303 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:07.303 10:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:07.561 [2024-07-25 10:40:11.112588] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:07.561 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:07.561 "name": "raid_bdev1", 00:25:07.561 "aliases": [ 00:25:07.561 "2b695b6b-a3ae-4f50-b036-05bd232b2ad1" 00:25:07.561 ], 00:25:07.561 "product_name": "Raid Volume", 00:25:07.561 "block_size": 4096, 00:25:07.561 "num_blocks": 7936, 00:25:07.561 "uuid": "2b695b6b-a3ae-4f50-b036-05bd232b2ad1", 00:25:07.561 "assigned_rate_limits": { 00:25:07.561 "rw_ios_per_sec": 0, 00:25:07.561 "rw_mbytes_per_sec": 0, 00:25:07.561 "r_mbytes_per_sec": 0, 00:25:07.561 "w_mbytes_per_sec": 0 00:25:07.561 }, 00:25:07.561 "claimed": false, 00:25:07.561 "zoned": false, 00:25:07.561 "supported_io_types": { 00:25:07.561 
"read": true, 00:25:07.561 "write": true, 00:25:07.561 "unmap": false, 00:25:07.561 "flush": false, 00:25:07.561 "reset": true, 00:25:07.561 "nvme_admin": false, 00:25:07.561 "nvme_io": false, 00:25:07.561 "nvme_io_md": false, 00:25:07.561 "write_zeroes": true, 00:25:07.561 "zcopy": false, 00:25:07.561 "get_zone_info": false, 00:25:07.561 "zone_management": false, 00:25:07.561 "zone_append": false, 00:25:07.561 "compare": false, 00:25:07.561 "compare_and_write": false, 00:25:07.561 "abort": false, 00:25:07.561 "seek_hole": false, 00:25:07.561 "seek_data": false, 00:25:07.561 "copy": false, 00:25:07.561 "nvme_iov_md": false 00:25:07.561 }, 00:25:07.561 "memory_domains": [ 00:25:07.561 { 00:25:07.561 "dma_device_id": "system", 00:25:07.561 "dma_device_type": 1 00:25:07.561 }, 00:25:07.561 { 00:25:07.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.561 "dma_device_type": 2 00:25:07.561 }, 00:25:07.561 { 00:25:07.561 "dma_device_id": "system", 00:25:07.561 "dma_device_type": 1 00:25:07.561 }, 00:25:07.561 { 00:25:07.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.561 "dma_device_type": 2 00:25:07.561 } 00:25:07.561 ], 00:25:07.561 "driver_specific": { 00:25:07.561 "raid": { 00:25:07.561 "uuid": "2b695b6b-a3ae-4f50-b036-05bd232b2ad1", 00:25:07.561 "strip_size_kb": 0, 00:25:07.561 "state": "online", 00:25:07.561 "raid_level": "raid1", 00:25:07.561 "superblock": true, 00:25:07.561 "num_base_bdevs": 2, 00:25:07.561 "num_base_bdevs_discovered": 2, 00:25:07.561 "num_base_bdevs_operational": 2, 00:25:07.561 "base_bdevs_list": [ 00:25:07.561 { 00:25:07.561 "name": "pt1", 00:25:07.561 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:07.561 "is_configured": true, 00:25:07.561 "data_offset": 256, 00:25:07.561 "data_size": 7936 00:25:07.561 }, 00:25:07.561 { 00:25:07.561 "name": "pt2", 00:25:07.561 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:07.561 "is_configured": true, 00:25:07.561 "data_offset": 256, 00:25:07.561 "data_size": 7936 00:25:07.561 } 
00:25:07.561 ] 00:25:07.561 } 00:25:07.561 } 00:25:07.561 }' 00:25:07.561 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:07.561 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:07.561 pt2' 00:25:07.561 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:07.561 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:07.561 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:07.819 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:07.819 "name": "pt1", 00:25:07.819 "aliases": [ 00:25:07.819 "00000000-0000-0000-0000-000000000001" 00:25:07.819 ], 00:25:07.819 "product_name": "passthru", 00:25:07.819 "block_size": 4096, 00:25:07.819 "num_blocks": 8192, 00:25:07.819 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:07.819 "assigned_rate_limits": { 00:25:07.819 "rw_ios_per_sec": 0, 00:25:07.819 "rw_mbytes_per_sec": 0, 00:25:07.819 "r_mbytes_per_sec": 0, 00:25:07.819 "w_mbytes_per_sec": 0 00:25:07.819 }, 00:25:07.819 "claimed": true, 00:25:07.819 "claim_type": "exclusive_write", 00:25:07.819 "zoned": false, 00:25:07.819 "supported_io_types": { 00:25:07.819 "read": true, 00:25:07.819 "write": true, 00:25:07.819 "unmap": true, 00:25:07.819 "flush": true, 00:25:07.819 "reset": true, 00:25:07.819 "nvme_admin": false, 00:25:07.819 "nvme_io": false, 00:25:07.819 "nvme_io_md": false, 00:25:07.819 "write_zeroes": true, 00:25:07.819 "zcopy": true, 00:25:07.819 "get_zone_info": false, 00:25:07.819 "zone_management": false, 00:25:07.819 "zone_append": false, 00:25:07.819 "compare": false, 00:25:07.819 "compare_and_write": false, 00:25:07.819 
"abort": true, 00:25:07.819 "seek_hole": false, 00:25:07.819 "seek_data": false, 00:25:07.819 "copy": true, 00:25:07.819 "nvme_iov_md": false 00:25:07.819 }, 00:25:07.819 "memory_domains": [ 00:25:07.819 { 00:25:07.819 "dma_device_id": "system", 00:25:07.819 "dma_device_type": 1 00:25:07.819 }, 00:25:07.819 { 00:25:07.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.819 "dma_device_type": 2 00:25:07.819 } 00:25:07.819 ], 00:25:07.819 "driver_specific": { 00:25:07.819 "passthru": { 00:25:07.819 "name": "pt1", 00:25:07.819 "base_bdev_name": "malloc1" 00:25:07.819 } 00:25:07.819 } 00:25:07.819 }' 00:25:07.819 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:07.819 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:07.819 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:07.819 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:07.819 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:08.077 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:08.077 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.077 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.077 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:08.077 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.077 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.077 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:08.077 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:08.077 10:40:11 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:08.077 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:08.334 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:08.334 "name": "pt2", 00:25:08.334 "aliases": [ 00:25:08.334 "00000000-0000-0000-0000-000000000002" 00:25:08.334 ], 00:25:08.334 "product_name": "passthru", 00:25:08.334 "block_size": 4096, 00:25:08.334 "num_blocks": 8192, 00:25:08.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:08.334 "assigned_rate_limits": { 00:25:08.334 "rw_ios_per_sec": 0, 00:25:08.334 "rw_mbytes_per_sec": 0, 00:25:08.334 "r_mbytes_per_sec": 0, 00:25:08.334 "w_mbytes_per_sec": 0 00:25:08.334 }, 00:25:08.334 "claimed": true, 00:25:08.334 "claim_type": "exclusive_write", 00:25:08.334 "zoned": false, 00:25:08.334 "supported_io_types": { 00:25:08.334 "read": true, 00:25:08.334 "write": true, 00:25:08.334 "unmap": true, 00:25:08.334 "flush": true, 00:25:08.334 "reset": true, 00:25:08.334 "nvme_admin": false, 00:25:08.334 "nvme_io": false, 00:25:08.334 "nvme_io_md": false, 00:25:08.334 "write_zeroes": true, 00:25:08.334 "zcopy": true, 00:25:08.334 "get_zone_info": false, 00:25:08.334 "zone_management": false, 00:25:08.334 "zone_append": false, 00:25:08.334 "compare": false, 00:25:08.334 "compare_and_write": false, 00:25:08.334 "abort": true, 00:25:08.334 "seek_hole": false, 00:25:08.334 "seek_data": false, 00:25:08.334 "copy": true, 00:25:08.334 "nvme_iov_md": false 00:25:08.334 }, 00:25:08.334 "memory_domains": [ 00:25:08.334 { 00:25:08.334 "dma_device_id": "system", 00:25:08.334 "dma_device_type": 1 00:25:08.334 }, 00:25:08.334 { 00:25:08.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.334 "dma_device_type": 2 00:25:08.334 } 00:25:08.334 ], 00:25:08.334 "driver_specific": { 00:25:08.334 "passthru": { 
00:25:08.334 "name": "pt2", 00:25:08.334 "base_bdev_name": "malloc2" 00:25:08.334 } 00:25:08.334 } 00:25:08.334 }' 00:25:08.334 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:08.334 10:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:08.335 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:08.335 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:08.592 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:08.592 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:08.592 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.592 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.592 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:08.592 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.592 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.592 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:08.592 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:08.592 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:25:08.850 [2024-07-25 10:40:12.476221] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:08.850 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2b695b6b-a3ae-4f50-b036-05bd232b2ad1 00:25:08.850 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' 
-z 2b695b6b-a3ae-4f50-b036-05bd232b2ad1 ']' 00:25:08.850 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:09.108 [2024-07-25 10:40:12.728643] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:09.108 [2024-07-25 10:40:12.728674] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:09.108 [2024-07-25 10:40:12.728759] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:09.108 [2024-07-25 10:40:12.728829] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:09.108 [2024-07-25 10:40:12.728843] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2406df0 name raid_bdev1, state offline 00:25:09.108 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.108 10:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:25:09.365 10:40:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:25:09.365 10:40:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:25:09.365 10:40:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:09.365 10:40:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:09.623 10:40:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:09.623 10:40:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:09.881 10:40:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:09.881 10:40:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:10.138 10:40:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:25:10.138 10:40:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:10.138 10:40:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:25:10.138 10:40:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:10.138 10:40:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:10.138 10:40:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.138 10:40:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:10.138 10:40:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.138 10:40:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:10.138 10:40:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.138 10:40:13 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:10.138 10:40:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:10.138 10:40:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:10.396 [2024-07-25 10:40:14.056087] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:10.396 [2024-07-25 10:40:14.057391] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:10.396 [2024-07-25 10:40:14.057469] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:10.396 [2024-07-25 10:40:14.057533] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:10.396 [2024-07-25 10:40:14.057556] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:10.396 [2024-07-25 10:40:14.057565] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2270b80 name raid_bdev1, state configuring 00:25:10.396 request: 00:25:10.396 { 00:25:10.396 "name": "raid_bdev1", 00:25:10.396 "raid_level": "raid1", 00:25:10.396 "base_bdevs": [ 00:25:10.396 "malloc1", 00:25:10.396 "malloc2" 00:25:10.396 ], 00:25:10.396 "superblock": false, 00:25:10.396 "method": "bdev_raid_create", 00:25:10.396 "req_id": 1 00:25:10.396 } 00:25:10.396 Got JSON-RPC error response 00:25:10.396 response: 00:25:10.396 { 00:25:10.396 "code": -17, 00:25:10.396 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:10.396 } 00:25:10.396 10:40:14 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # es=1 00:25:10.396 10:40:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:10.396 10:40:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:10.396 10:40:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:10.396 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.396 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:25:10.653 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:25:10.653 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:25:10.653 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:10.911 [2024-07-25 10:40:14.553367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:10.911 [2024-07-25 10:40:14.553454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.911 [2024-07-25 10:40:14.553477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24077b0 00:25:10.911 [2024-07-25 10:40:14.553490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.911 [2024-07-25 10:40:14.555070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.911 [2024-07-25 10:40:14.555117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:10.911 [2024-07-25 10:40:14.555218] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:10.911 [2024-07-25 
10:40:14.555253] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:10.911 pt1 00:25:10.911 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:10.911 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:10.911 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:10.911 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:10.911 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:10.911 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:10.911 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:10.911 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:10.911 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:10.911 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:10.911 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.911 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.168 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:11.169 "name": "raid_bdev1", 00:25:11.169 "uuid": "2b695b6b-a3ae-4f50-b036-05bd232b2ad1", 00:25:11.169 "strip_size_kb": 0, 00:25:11.169 "state": "configuring", 00:25:11.169 "raid_level": "raid1", 00:25:11.169 "superblock": true, 00:25:11.169 "num_base_bdevs": 2, 00:25:11.169 "num_base_bdevs_discovered": 
1, 00:25:11.169 "num_base_bdevs_operational": 2, 00:25:11.169 "base_bdevs_list": [ 00:25:11.169 { 00:25:11.169 "name": "pt1", 00:25:11.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:11.169 "is_configured": true, 00:25:11.169 "data_offset": 256, 00:25:11.169 "data_size": 7936 00:25:11.169 }, 00:25:11.169 { 00:25:11.169 "name": null, 00:25:11.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:11.169 "is_configured": false, 00:25:11.169 "data_offset": 256, 00:25:11.169 "data_size": 7936 00:25:11.169 } 00:25:11.169 ] 00:25:11.169 }' 00:25:11.169 10:40:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:11.169 10:40:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:11.732 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:25:11.732 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:25:11.732 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:11.732 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:11.989 [2024-07-25 10:40:15.543991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:11.989 [2024-07-25 10:40:15.544063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.989 [2024-07-25 10:40:15.544085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x226e8c0 00:25:11.989 [2024-07-25 10:40:15.544097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.989 [2024-07-25 10:40:15.544553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.989 [2024-07-25 10:40:15.544574] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:11.989 [2024-07-25 10:40:15.544649] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:11.989 [2024-07-25 10:40:15.544672] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:11.989 [2024-07-25 10:40:15.544777] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x226e3d0 00:25:11.989 [2024-07-25 10:40:15.544790] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:11.989 [2024-07-25 10:40:15.544938] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2406dc0 00:25:11.989 [2024-07-25 10:40:15.545066] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x226e3d0 00:25:11.989 [2024-07-25 10:40:15.545093] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x226e3d0 00:25:11.989 [2024-07-25 10:40:15.545208] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:11.989 pt2 00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.989 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.247 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:12.247 "name": "raid_bdev1", 00:25:12.247 "uuid": "2b695b6b-a3ae-4f50-b036-05bd232b2ad1", 00:25:12.247 "strip_size_kb": 0, 00:25:12.247 "state": "online", 00:25:12.247 "raid_level": "raid1", 00:25:12.247 "superblock": true, 00:25:12.247 "num_base_bdevs": 2, 00:25:12.247 "num_base_bdevs_discovered": 2, 00:25:12.247 "num_base_bdevs_operational": 2, 00:25:12.247 "base_bdevs_list": [ 00:25:12.247 { 00:25:12.247 "name": "pt1", 00:25:12.247 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:12.247 "is_configured": true, 00:25:12.247 "data_offset": 256, 00:25:12.247 "data_size": 7936 00:25:12.247 }, 00:25:12.247 { 00:25:12.247 "name": "pt2", 00:25:12.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:12.247 "is_configured": true, 00:25:12.247 "data_offset": 256, 00:25:12.247 "data_size": 7936 00:25:12.247 } 00:25:12.247 ] 00:25:12.247 }' 00:25:12.247 10:40:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:12.247 10:40:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:12.840 10:40:16 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:25:12.840 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:12.840 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:12.840 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:12.840 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:12.840 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:25:12.840 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:12.840 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:13.097 [2024-07-25 10:40:16.546875] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:13.098 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:13.098 "name": "raid_bdev1", 00:25:13.098 "aliases": [ 00:25:13.098 "2b695b6b-a3ae-4f50-b036-05bd232b2ad1" 00:25:13.098 ], 00:25:13.098 "product_name": "Raid Volume", 00:25:13.098 "block_size": 4096, 00:25:13.098 "num_blocks": 7936, 00:25:13.098 "uuid": "2b695b6b-a3ae-4f50-b036-05bd232b2ad1", 00:25:13.098 "assigned_rate_limits": { 00:25:13.098 "rw_ios_per_sec": 0, 00:25:13.098 "rw_mbytes_per_sec": 0, 00:25:13.098 "r_mbytes_per_sec": 0, 00:25:13.098 "w_mbytes_per_sec": 0 00:25:13.098 }, 00:25:13.098 "claimed": false, 00:25:13.098 "zoned": false, 00:25:13.098 "supported_io_types": { 00:25:13.098 "read": true, 00:25:13.098 "write": true, 00:25:13.098 "unmap": false, 00:25:13.098 "flush": false, 00:25:13.098 "reset": true, 00:25:13.098 "nvme_admin": false, 00:25:13.098 "nvme_io": false, 00:25:13.098 "nvme_io_md": false, 00:25:13.098 "write_zeroes": true, 
00:25:13.098 "zcopy": false, 00:25:13.098 "get_zone_info": false, 00:25:13.098 "zone_management": false, 00:25:13.098 "zone_append": false, 00:25:13.098 "compare": false, 00:25:13.098 "compare_and_write": false, 00:25:13.098 "abort": false, 00:25:13.098 "seek_hole": false, 00:25:13.098 "seek_data": false, 00:25:13.098 "copy": false, 00:25:13.098 "nvme_iov_md": false 00:25:13.098 }, 00:25:13.098 "memory_domains": [ 00:25:13.098 { 00:25:13.098 "dma_device_id": "system", 00:25:13.098 "dma_device_type": 1 00:25:13.098 }, 00:25:13.098 { 00:25:13.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.098 "dma_device_type": 2 00:25:13.098 }, 00:25:13.098 { 00:25:13.098 "dma_device_id": "system", 00:25:13.098 "dma_device_type": 1 00:25:13.098 }, 00:25:13.098 { 00:25:13.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.098 "dma_device_type": 2 00:25:13.098 } 00:25:13.098 ], 00:25:13.098 "driver_specific": { 00:25:13.098 "raid": { 00:25:13.098 "uuid": "2b695b6b-a3ae-4f50-b036-05bd232b2ad1", 00:25:13.098 "strip_size_kb": 0, 00:25:13.098 "state": "online", 00:25:13.098 "raid_level": "raid1", 00:25:13.098 "superblock": true, 00:25:13.098 "num_base_bdevs": 2, 00:25:13.098 "num_base_bdevs_discovered": 2, 00:25:13.098 "num_base_bdevs_operational": 2, 00:25:13.098 "base_bdevs_list": [ 00:25:13.098 { 00:25:13.098 "name": "pt1", 00:25:13.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:13.098 "is_configured": true, 00:25:13.098 "data_offset": 256, 00:25:13.098 "data_size": 7936 00:25:13.098 }, 00:25:13.098 { 00:25:13.098 "name": "pt2", 00:25:13.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:13.098 "is_configured": true, 00:25:13.098 "data_offset": 256, 00:25:13.098 "data_size": 7936 00:25:13.098 } 00:25:13.098 ] 00:25:13.098 } 00:25:13.098 } 00:25:13.098 }' 00:25:13.098 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:13.098 10:40:16 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:13.098 pt2' 00:25:13.098 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:13.098 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:13.098 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:13.356 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:13.356 "name": "pt1", 00:25:13.356 "aliases": [ 00:25:13.356 "00000000-0000-0000-0000-000000000001" 00:25:13.356 ], 00:25:13.356 "product_name": "passthru", 00:25:13.356 "block_size": 4096, 00:25:13.356 "num_blocks": 8192, 00:25:13.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:13.356 "assigned_rate_limits": { 00:25:13.356 "rw_ios_per_sec": 0, 00:25:13.356 "rw_mbytes_per_sec": 0, 00:25:13.356 "r_mbytes_per_sec": 0, 00:25:13.356 "w_mbytes_per_sec": 0 00:25:13.356 }, 00:25:13.356 "claimed": true, 00:25:13.356 "claim_type": "exclusive_write", 00:25:13.356 "zoned": false, 00:25:13.356 "supported_io_types": { 00:25:13.356 "read": true, 00:25:13.356 "write": true, 00:25:13.356 "unmap": true, 00:25:13.356 "flush": true, 00:25:13.356 "reset": true, 00:25:13.356 "nvme_admin": false, 00:25:13.356 "nvme_io": false, 00:25:13.356 "nvme_io_md": false, 00:25:13.356 "write_zeroes": true, 00:25:13.356 "zcopy": true, 00:25:13.356 "get_zone_info": false, 00:25:13.356 "zone_management": false, 00:25:13.356 "zone_append": false, 00:25:13.356 "compare": false, 00:25:13.356 "compare_and_write": false, 00:25:13.356 "abort": true, 00:25:13.356 "seek_hole": false, 00:25:13.356 "seek_data": false, 00:25:13.356 "copy": true, 00:25:13.356 "nvme_iov_md": false 00:25:13.356 }, 00:25:13.356 "memory_domains": [ 00:25:13.356 { 00:25:13.356 "dma_device_id": "system", 00:25:13.356 
"dma_device_type": 1 00:25:13.356 }, 00:25:13.356 { 00:25:13.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.356 "dma_device_type": 2 00:25:13.356 } 00:25:13.356 ], 00:25:13.356 "driver_specific": { 00:25:13.356 "passthru": { 00:25:13.356 "name": "pt1", 00:25:13.356 "base_bdev_name": "malloc1" 00:25:13.356 } 00:25:13.356 } 00:25:13.356 }' 00:25:13.356 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.356 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.356 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:13.356 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.356 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.356 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:13.356 10:40:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.356 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.356 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.356 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.613 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.613 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.613 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:13.613 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:13.613 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:13.871 
10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:13.871 "name": "pt2", 00:25:13.871 "aliases": [ 00:25:13.871 "00000000-0000-0000-0000-000000000002" 00:25:13.871 ], 00:25:13.871 "product_name": "passthru", 00:25:13.871 "block_size": 4096, 00:25:13.871 "num_blocks": 8192, 00:25:13.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:13.871 "assigned_rate_limits": { 00:25:13.871 "rw_ios_per_sec": 0, 00:25:13.871 "rw_mbytes_per_sec": 0, 00:25:13.871 "r_mbytes_per_sec": 0, 00:25:13.871 "w_mbytes_per_sec": 0 00:25:13.871 }, 00:25:13.871 "claimed": true, 00:25:13.871 "claim_type": "exclusive_write", 00:25:13.871 "zoned": false, 00:25:13.871 "supported_io_types": { 00:25:13.871 "read": true, 00:25:13.871 "write": true, 00:25:13.871 "unmap": true, 00:25:13.871 "flush": true, 00:25:13.871 "reset": true, 00:25:13.871 "nvme_admin": false, 00:25:13.871 "nvme_io": false, 00:25:13.871 "nvme_io_md": false, 00:25:13.871 "write_zeroes": true, 00:25:13.871 "zcopy": true, 00:25:13.871 "get_zone_info": false, 00:25:13.871 "zone_management": false, 00:25:13.871 "zone_append": false, 00:25:13.871 "compare": false, 00:25:13.871 "compare_and_write": false, 00:25:13.871 "abort": true, 00:25:13.871 "seek_hole": false, 00:25:13.871 "seek_data": false, 00:25:13.871 "copy": true, 00:25:13.871 "nvme_iov_md": false 00:25:13.871 }, 00:25:13.871 "memory_domains": [ 00:25:13.871 { 00:25:13.871 "dma_device_id": "system", 00:25:13.871 "dma_device_type": 1 00:25:13.871 }, 00:25:13.871 { 00:25:13.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.871 "dma_device_type": 2 00:25:13.871 } 00:25:13.871 ], 00:25:13.871 "driver_specific": { 00:25:13.871 "passthru": { 00:25:13.871 "name": "pt2", 00:25:13.871 "base_bdev_name": "malloc2" 00:25:13.871 } 00:25:13.871 } 00:25:13.871 }' 00:25:13.871 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.871 10:40:17 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.871 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:13.871 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.871 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.871 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:13.871 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.871 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.130 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:14.130 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:14.130 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:14.130 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:14.130 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:14.130 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:25:14.386 [2024-07-25 10:40:17.874494] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:14.386 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 2b695b6b-a3ae-4f50-b036-05bd232b2ad1 '!=' 2b695b6b-a3ae-4f50-b036-05bd232b2ad1 ']' 00:25:14.386 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:25:14.386 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:14.386 10:40:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:25:14.386 10:40:17 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:14.643 [2024-07-25 10:40:18.114914] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:14.643 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:14.643 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:14.643 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:14.643 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:14.643 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:14.643 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:14.643 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:14.643 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:14.643 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:14.643 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:14.643 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.643 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.901 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.901 "name": "raid_bdev1", 00:25:14.901 "uuid": "2b695b6b-a3ae-4f50-b036-05bd232b2ad1", 00:25:14.901 "strip_size_kb": 0, 00:25:14.901 "state": 
"online", 00:25:14.901 "raid_level": "raid1", 00:25:14.901 "superblock": true, 00:25:14.901 "num_base_bdevs": 2, 00:25:14.901 "num_base_bdevs_discovered": 1, 00:25:14.901 "num_base_bdevs_operational": 1, 00:25:14.901 "base_bdevs_list": [ 00:25:14.901 { 00:25:14.901 "name": null, 00:25:14.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.901 "is_configured": false, 00:25:14.901 "data_offset": 256, 00:25:14.901 "data_size": 7936 00:25:14.901 }, 00:25:14.901 { 00:25:14.901 "name": "pt2", 00:25:14.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:14.901 "is_configured": true, 00:25:14.901 "data_offset": 256, 00:25:14.901 "data_size": 7936 00:25:14.901 } 00:25:14.901 ] 00:25:14.901 }' 00:25:14.901 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.901 10:40:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:15.464 10:40:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:15.464 [2024-07-25 10:40:19.153626] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:15.464 [2024-07-25 10:40:19.153661] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:15.464 [2024-07-25 10:40:19.153741] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:15.464 [2024-07-25 10:40:19.153800] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:15.464 [2024-07-25 10:40:19.153816] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x226e3d0 name raid_bdev1, state offline 00:25:15.722 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.722 
10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:25:15.722 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:25:15.722 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:25:15.722 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:25:15.722 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:15.722 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:15.979 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:25:15.979 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:15.979 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:25:15.979 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:25:15.979 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:25:15.979 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:16.236 [2024-07-25 10:40:19.879511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:16.236 [2024-07-25 10:40:19.879580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.237 [2024-07-25 10:40:19.879600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x226e650 00:25:16.237 [2024-07-25 10:40:19.879618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.237 [2024-07-25 10:40:19.881080] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.237 [2024-07-25 10:40:19.881126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:16.237 [2024-07-25 10:40:19.881204] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:16.237 [2024-07-25 10:40:19.881237] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:16.237 [2024-07-25 10:40:19.881335] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2407210 00:25:16.237 [2024-07-25 10:40:19.881349] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:16.237 [2024-07-25 10:40:19.881527] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2409a50 00:25:16.237 [2024-07-25 10:40:19.881646] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2407210 00:25:16.237 [2024-07-25 10:40:19.881659] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2407210 00:25:16.237 [2024-07-25 10:40:19.881747] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.237 pt2 00:25:16.237 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:16.237 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:16.237 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:16.237 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:16.237 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:16.237 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:16.237 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:25:16.237 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:16.237 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:16.237 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:16.237 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.237 10:40:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.494 10:40:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:16.494 "name": "raid_bdev1", 00:25:16.494 "uuid": "2b695b6b-a3ae-4f50-b036-05bd232b2ad1", 00:25:16.494 "strip_size_kb": 0, 00:25:16.494 "state": "online", 00:25:16.494 "raid_level": "raid1", 00:25:16.494 "superblock": true, 00:25:16.494 "num_base_bdevs": 2, 00:25:16.494 "num_base_bdevs_discovered": 1, 00:25:16.494 "num_base_bdevs_operational": 1, 00:25:16.494 "base_bdevs_list": [ 00:25:16.494 { 00:25:16.494 "name": null, 00:25:16.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.494 "is_configured": false, 00:25:16.494 "data_offset": 256, 00:25:16.494 "data_size": 7936 00:25:16.494 }, 00:25:16.494 { 00:25:16.494 "name": "pt2", 00:25:16.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:16.494 "is_configured": true, 00:25:16.494 "data_offset": 256, 00:25:16.494 "data_size": 7936 00:25:16.494 } 00:25:16.494 ] 00:25:16.494 }' 00:25:16.494 10:40:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:16.494 10:40:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:17.059 10:40:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:17.316 [2024-07-25 10:40:20.898216] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:17.316 [2024-07-25 10:40:20.898242] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:17.316 [2024-07-25 10:40:20.898314] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:17.316 [2024-07-25 10:40:20.898368] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:17.316 [2024-07-25 10:40:20.898386] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2407210 name raid_bdev1, state offline 00:25:17.316 10:40:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.316 10:40:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:25:17.573 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:25:17.573 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:25:17.573 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:25:17.573 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:17.831 [2024-07-25 10:40:21.387558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:17.831 [2024-07-25 10:40:21.387611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.831 [2024-07-25 10:40:21.387634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2422410 00:25:17.831 [2024-07-25 10:40:21.387649] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.831 [2024-07-25 10:40:21.389421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.831 [2024-07-25 10:40:21.389449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:17.831 [2024-07-25 10:40:21.389529] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:17.831 [2024-07-25 10:40:21.389567] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:17.831 [2024-07-25 10:40:21.389694] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:17.831 [2024-07-25 10:40:21.389713] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:17.831 [2024-07-25 10:40:21.389727] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2409130 name raid_bdev1, state configuring 00:25:17.831 [2024-07-25 10:40:21.389754] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:17.831 [2024-07-25 10:40:21.389831] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2266d90 00:25:17.831 [2024-07-25 10:40:21.389847] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:17.831 [2024-07-25 10:40:21.390013] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24086a0 00:25:17.831 [2024-07-25 10:40:21.390179] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2266d90 00:25:17.831 [2024-07-25 10:40:21.390196] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2266d90 00:25:17.831 [2024-07-25 10:40:21.390306] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:17.831 pt1 00:25:17.831 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:25:17.831 
10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:17.831 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:17.831 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:17.831 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:17.831 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:17.831 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:17.831 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:17.831 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:17.831 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:17.831 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:17.831 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.831 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.088 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:18.088 "name": "raid_bdev1", 00:25:18.088 "uuid": "2b695b6b-a3ae-4f50-b036-05bd232b2ad1", 00:25:18.088 "strip_size_kb": 0, 00:25:18.088 "state": "online", 00:25:18.088 "raid_level": "raid1", 00:25:18.088 "superblock": true, 00:25:18.088 "num_base_bdevs": 2, 00:25:18.088 "num_base_bdevs_discovered": 1, 00:25:18.088 "num_base_bdevs_operational": 1, 00:25:18.088 "base_bdevs_list": [ 00:25:18.088 { 00:25:18.088 "name": null, 00:25:18.088 
"uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.088 "is_configured": false, 00:25:18.088 "data_offset": 256, 00:25:18.088 "data_size": 7936 00:25:18.088 }, 00:25:18.088 { 00:25:18.088 "name": "pt2", 00:25:18.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:18.088 "is_configured": true, 00:25:18.088 "data_offset": 256, 00:25:18.088 "data_size": 7936 00:25:18.088 } 00:25:18.088 ] 00:25:18.088 }' 00:25:18.088 10:40:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:18.088 10:40:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:18.652 10:40:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:25:18.653 10:40:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:18.910 10:40:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:25:18.910 10:40:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:18.910 10:40:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:25:19.167 [2024-07-25 10:40:22.651161] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:19.167 10:40:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 2b695b6b-a3ae-4f50-b036-05bd232b2ad1 '!=' 2b695b6b-a3ae-4f50-b036-05bd232b2ad1 ']' 00:25:19.167 10:40:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 2461764 00:25:19.167 10:40:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 2461764 ']' 00:25:19.167 10:40:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 2461764 
00:25:19.167 10:40:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:25:19.167 10:40:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:19.167 10:40:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2461764 00:25:19.167 10:40:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:19.167 10:40:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:19.167 10:40:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2461764' 00:25:19.167 killing process with pid 2461764 00:25:19.167 10:40:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 2461764 00:25:19.167 [2024-07-25 10:40:22.702280] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:19.167 10:40:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 2461764 00:25:19.167 [2024-07-25 10:40:22.702356] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:19.167 [2024-07-25 10:40:22.702433] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:19.167 [2024-07-25 10:40:22.702447] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2266d90 name raid_bdev1, state offline 00:25:19.167 [2024-07-25 10:40:22.725354] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:19.426 10:40:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:25:19.426 00:25:19.426 real 0m15.163s 00:25:19.426 user 0m27.867s 00:25:19.426 sys 0m2.153s 00:25:19.426 10:40:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:19.426 10:40:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 
00:25:19.426 ************************************ 00:25:19.426 END TEST raid_superblock_test_4k 00:25:19.426 ************************************ 00:25:19.426 10:40:23 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:25:19.426 10:40:23 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:25:19.426 10:40:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:25:19.426 10:40:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:19.426 10:40:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:19.426 ************************************ 00:25:19.426 START TEST raid_rebuild_test_sb_4k 00:25:19.426 ************************************ 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:19.426 10:40:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:25:19.426 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:25:19.427 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:25:19.427 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:25:19.427 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=2463893 00:25:19.427 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:19.427 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 2463893 /var/tmp/spdk-raid.sock 00:25:19.427 10:40:23 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 2463893 ']' 00:25:19.427 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:19.427 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:19.427 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:19.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:19.427 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:19.427 10:40:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:19.427 [2024-07-25 10:40:23.115354] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:25:19.427 [2024-07-25 10:40:23.115442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463893 ] 00:25:19.427 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:19.427 Zero copy mechanism will not be used. 
00:25:19.684 [2024-07-25 10:40:23.201405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.684 [2024-07-25 10:40:23.324460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.941 [2024-07-25 10:40:23.397406] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:19.941 [2024-07-25 10:40:23.397448] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:20.506 10:40:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:20.506 10:40:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:25:20.506 10:40:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:20.506 10:40:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:25:20.764 BaseBdev1_malloc 00:25:20.764 10:40:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:21.022 [2024-07-25 10:40:24.584370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:21.022 [2024-07-25 10:40:24.584442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.022 [2024-07-25 10:40:24.584469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f6430 00:25:21.022 [2024-07-25 10:40:24.584485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.022 [2024-07-25 10:40:24.586000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.022 [2024-07-25 10:40:24.586027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:21.022 
BaseBdev1 00:25:21.022 10:40:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:21.022 10:40:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:25:21.279 BaseBdev2_malloc 00:25:21.279 10:40:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:21.535 [2024-07-25 10:40:25.082399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:21.535 [2024-07-25 10:40:25.082460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.535 [2024-07-25 10:40:25.082491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb99a20 00:25:21.535 [2024-07-25 10:40:25.082507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.535 [2024-07-25 10:40:25.083944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.535 [2024-07-25 10:40:25.083972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:21.535 BaseBdev2 00:25:21.535 10:40:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:25:21.793 spare_malloc 00:25:21.793 10:40:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:22.050 spare_delay 00:25:22.050 10:40:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:22.308 [2024-07-25 10:40:25.847356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:22.308 [2024-07-25 10:40:25.847409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.308 [2024-07-25 10:40:25.847435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9ee070 00:25:22.308 [2024-07-25 10:40:25.847450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.308 [2024-07-25 10:40:25.848853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.308 [2024-07-25 10:40:25.848880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:22.308 spare 00:25:22.308 10:40:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:25:22.565 [2024-07-25 10:40:26.088020] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:22.565 [2024-07-25 10:40:26.089457] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:22.565 [2024-07-25 10:40:26.089636] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xb91010 00:25:22.565 [2024-07-25 10:40:26.089654] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:22.565 [2024-07-25 10:40:26.089825] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9edd20 00:25:22.565 [2024-07-25 10:40:26.089997] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb91010 00:25:22.565 [2024-07-25 10:40:26.090013] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0xb91010 00:25:22.565 [2024-07-25 10:40:26.090128] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:22.565 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:22.565 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:22.565 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:22.565 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:22.566 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:22.566 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:22.566 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:22.566 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:22.566 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:22.566 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:22.566 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.566 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.823 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.823 "name": "raid_bdev1", 00:25:22.823 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:22.823 "strip_size_kb": 0, 00:25:22.823 "state": "online", 00:25:22.823 "raid_level": "raid1", 00:25:22.823 "superblock": true, 00:25:22.823 "num_base_bdevs": 2, 00:25:22.823 
"num_base_bdevs_discovered": 2, 00:25:22.823 "num_base_bdevs_operational": 2, 00:25:22.823 "base_bdevs_list": [ 00:25:22.823 { 00:25:22.823 "name": "BaseBdev1", 00:25:22.823 "uuid": "0750e25a-3c26-5f70-adc0-0bce87ec99f9", 00:25:22.823 "is_configured": true, 00:25:22.823 "data_offset": 256, 00:25:22.823 "data_size": 7936 00:25:22.823 }, 00:25:22.823 { 00:25:22.823 "name": "BaseBdev2", 00:25:22.823 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:22.823 "is_configured": true, 00:25:22.823 "data_offset": 256, 00:25:22.823 "data_size": 7936 00:25:22.823 } 00:25:22.823 ] 00:25:22.823 }' 00:25:22.823 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.823 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:23.387 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:23.388 10:40:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:25:23.645 [2024-07-25 10:40:27.126990] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:23.645 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:25:23.645 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.645 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:23.902 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:25:23.902 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:25:23.902 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:25:23.902 
10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:25:23.902 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:23.902 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:23.902 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:23.902 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:23.902 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:23.902 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:23.902 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:25:23.902 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:23.902 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:23.902 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:24.160 [2024-07-25 10:40:27.632187] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb90090 00:25:24.160 /dev/nbd0 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:24.160 10:40:27 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:24.160 1+0 records in 00:25:24.160 1+0 records out 00:25:24.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280999 s, 14.6 MB/s 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:25:24.160 10:40:27 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:25:24.725 7936+0 records in 00:25:24.725 7936+0 records out 00:25:24.725 32505856 bytes (33 MB, 31 MiB) copied, 0.711224 s, 45.7 MB/s 00:25:24.725 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:24.725 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:24.725 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:24.725 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:24.725 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:25:24.725 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:24.725 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:24.983 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:24.983 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:24.983 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:24.983 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:24.983 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:24.983 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:24.983 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:25:24.983 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:25:24.983 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:24.983 [2024-07-25 10:40:28.676393] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.240 [2024-07-25 10:40:28.901030] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:25.240 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:25.240 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:25.240 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:25.240 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:25.240 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:25.240 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:25.240 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.240 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.240 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:25.240 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.240 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.240 10:40:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.498 10:40:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.498 "name": "raid_bdev1", 00:25:25.498 "uuid": 
"ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:25.498 "strip_size_kb": 0, 00:25:25.498 "state": "online", 00:25:25.498 "raid_level": "raid1", 00:25:25.498 "superblock": true, 00:25:25.498 "num_base_bdevs": 2, 00:25:25.498 "num_base_bdevs_discovered": 1, 00:25:25.498 "num_base_bdevs_operational": 1, 00:25:25.498 "base_bdevs_list": [ 00:25:25.498 { 00:25:25.498 "name": null, 00:25:25.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.498 "is_configured": false, 00:25:25.498 "data_offset": 256, 00:25:25.498 "data_size": 7936 00:25:25.498 }, 00:25:25.498 { 00:25:25.498 "name": "BaseBdev2", 00:25:25.498 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:25.498 "is_configured": true, 00:25:25.498 "data_offset": 256, 00:25:25.498 "data_size": 7936 00:25:25.498 } 00:25:25.498 ] 00:25:25.498 }' 00:25:25.498 10:40:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:25.498 10:40:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:26.063 10:40:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:26.321 [2024-07-25 10:40:29.951848] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:26.321 [2024-07-25 10:40:29.958488] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb8ff70 00:25:26.321 [2024-07-25 10:40:29.960720] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:26.321 10:40:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:25:27.694 10:40:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:27.694 10:40:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:27.694 10:40:30 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:27.694 10:40:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:27.694 10:40:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:27.694 10:40:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.694 10:40:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.694 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:27.694 "name": "raid_bdev1", 00:25:27.694 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:27.694 "strip_size_kb": 0, 00:25:27.694 "state": "online", 00:25:27.694 "raid_level": "raid1", 00:25:27.694 "superblock": true, 00:25:27.694 "num_base_bdevs": 2, 00:25:27.694 "num_base_bdevs_discovered": 2, 00:25:27.694 "num_base_bdevs_operational": 2, 00:25:27.694 "process": { 00:25:27.694 "type": "rebuild", 00:25:27.694 "target": "spare", 00:25:27.694 "progress": { 00:25:27.694 "blocks": 3072, 00:25:27.694 "percent": 38 00:25:27.694 } 00:25:27.694 }, 00:25:27.694 "base_bdevs_list": [ 00:25:27.694 { 00:25:27.694 "name": "spare", 00:25:27.694 "uuid": "bb26fb63-7019-5846-a880-dce0a0b65c44", 00:25:27.694 "is_configured": true, 00:25:27.694 "data_offset": 256, 00:25:27.694 "data_size": 7936 00:25:27.694 }, 00:25:27.694 { 00:25:27.694 "name": "BaseBdev2", 00:25:27.694 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:27.694 "is_configured": true, 00:25:27.694 "data_offset": 256, 00:25:27.694 "data_size": 7936 00:25:27.694 } 00:25:27.694 ] 00:25:27.694 }' 00:25:27.694 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:27.694 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:27.694 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:27.694 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:27.694 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:27.952 [2024-07-25 10:40:31.527111] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:27.952 [2024-07-25 10:40:31.574338] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:27.952 [2024-07-25 10:40:31.574395] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.952 [2024-07-25 10:40:31.574413] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:27.952 [2024-07-25 10:40:31.574422] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:27.952 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:27.952 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:27.952 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:27.952 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:27.952 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:27.952 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:27.952 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:27.952 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:27.952 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:27.952 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:27.952 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.952 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.210 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:28.210 "name": "raid_bdev1", 00:25:28.210 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:28.210 "strip_size_kb": 0, 00:25:28.210 "state": "online", 00:25:28.210 "raid_level": "raid1", 00:25:28.210 "superblock": true, 00:25:28.210 "num_base_bdevs": 2, 00:25:28.210 "num_base_bdevs_discovered": 1, 00:25:28.210 "num_base_bdevs_operational": 1, 00:25:28.210 "base_bdevs_list": [ 00:25:28.210 { 00:25:28.210 "name": null, 00:25:28.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.210 "is_configured": false, 00:25:28.210 "data_offset": 256, 00:25:28.210 "data_size": 7936 00:25:28.210 }, 00:25:28.210 { 00:25:28.210 "name": "BaseBdev2", 00:25:28.210 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:28.210 "is_configured": true, 00:25:28.210 "data_offset": 256, 00:25:28.210 "data_size": 7936 00:25:28.210 } 00:25:28.210 ] 00:25:28.210 }' 00:25:28.210 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:28.210 10:40:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:28.774 10:40:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:28.774 10:40:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:25:28.774 10:40:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:28.774 10:40:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:28.774 10:40:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:28.774 10:40:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.774 10:40:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.032 10:40:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:29.032 "name": "raid_bdev1", 00:25:29.032 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:29.032 "strip_size_kb": 0, 00:25:29.032 "state": "online", 00:25:29.032 "raid_level": "raid1", 00:25:29.032 "superblock": true, 00:25:29.032 "num_base_bdevs": 2, 00:25:29.032 "num_base_bdevs_discovered": 1, 00:25:29.032 "num_base_bdevs_operational": 1, 00:25:29.032 "base_bdevs_list": [ 00:25:29.032 { 00:25:29.032 "name": null, 00:25:29.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.032 "is_configured": false, 00:25:29.032 "data_offset": 256, 00:25:29.032 "data_size": 7936 00:25:29.032 }, 00:25:29.032 { 00:25:29.032 "name": "BaseBdev2", 00:25:29.032 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:29.032 "is_configured": true, 00:25:29.032 "data_offset": 256, 00:25:29.032 "data_size": 7936 00:25:29.032 } 00:25:29.032 ] 00:25:29.032 }' 00:25:29.032 10:40:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:29.032 10:40:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:29.032 10:40:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:25:29.032 10:40:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:29.032 10:40:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:29.314 [2024-07-25 10:40:32.996049] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:29.314 [2024-07-25 10:40:33.003046] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb8ff70 00:25:29.314 [2024-07-25 10:40:33.004588] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:29.314 10:40:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:30.729 "name": "raid_bdev1", 00:25:30.729 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:30.729 "strip_size_kb": 0, 00:25:30.729 "state": "online", 00:25:30.729 
"raid_level": "raid1", 00:25:30.729 "superblock": true, 00:25:30.729 "num_base_bdevs": 2, 00:25:30.729 "num_base_bdevs_discovered": 2, 00:25:30.729 "num_base_bdevs_operational": 2, 00:25:30.729 "process": { 00:25:30.729 "type": "rebuild", 00:25:30.729 "target": "spare", 00:25:30.729 "progress": { 00:25:30.729 "blocks": 3072, 00:25:30.729 "percent": 38 00:25:30.729 } 00:25:30.729 }, 00:25:30.729 "base_bdevs_list": [ 00:25:30.729 { 00:25:30.729 "name": "spare", 00:25:30.729 "uuid": "bb26fb63-7019-5846-a880-dce0a0b65c44", 00:25:30.729 "is_configured": true, 00:25:30.729 "data_offset": 256, 00:25:30.729 "data_size": 7936 00:25:30.729 }, 00:25:30.729 { 00:25:30.729 "name": "BaseBdev2", 00:25:30.729 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:30.729 "is_configured": true, 00:25:30.729 "data_offset": 256, 00:25:30.729 "data_size": 7936 00:25:30.729 } 00:25:30.729 ] 00:25:30.729 }' 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:25:30.729 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=998 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.729 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.987 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:30.987 "name": "raid_bdev1", 00:25:30.987 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:30.987 "strip_size_kb": 0, 00:25:30.987 "state": "online", 00:25:30.987 "raid_level": "raid1", 00:25:30.987 "superblock": true, 00:25:30.987 "num_base_bdevs": 2, 00:25:30.987 "num_base_bdevs_discovered": 2, 00:25:30.987 "num_base_bdevs_operational": 2, 00:25:30.987 "process": { 00:25:30.987 "type": "rebuild", 00:25:30.987 "target": "spare", 00:25:30.987 "progress": { 00:25:30.987 "blocks": 3840, 00:25:30.987 "percent": 48 00:25:30.987 } 00:25:30.987 }, 00:25:30.987 "base_bdevs_list": [ 00:25:30.987 { 00:25:30.987 "name": "spare", 00:25:30.987 "uuid": "bb26fb63-7019-5846-a880-dce0a0b65c44", 00:25:30.987 "is_configured": 
true, 00:25:30.987 "data_offset": 256, 00:25:30.987 "data_size": 7936 00:25:30.987 }, 00:25:30.987 { 00:25:30.987 "name": "BaseBdev2", 00:25:30.987 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:30.987 "is_configured": true, 00:25:30.987 "data_offset": 256, 00:25:30.987 "data_size": 7936 00:25:30.987 } 00:25:30.987 ] 00:25:30.987 }' 00:25:30.987 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:30.987 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:30.987 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:31.244 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:31.244 10:40:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:32.176 10:40:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:32.176 10:40:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:32.176 10:40:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:32.176 10:40:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:32.176 10:40:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:32.176 10:40:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:32.176 10:40:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.176 10:40:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.433 10:40:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:32.433 "name": "raid_bdev1", 00:25:32.433 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:32.433 "strip_size_kb": 0, 00:25:32.433 "state": "online", 00:25:32.433 "raid_level": "raid1", 00:25:32.433 "superblock": true, 00:25:32.433 "num_base_bdevs": 2, 00:25:32.433 "num_base_bdevs_discovered": 2, 00:25:32.433 "num_base_bdevs_operational": 2, 00:25:32.433 "process": { 00:25:32.433 "type": "rebuild", 00:25:32.433 "target": "spare", 00:25:32.433 "progress": { 00:25:32.433 "blocks": 7424, 00:25:32.433 "percent": 93 00:25:32.433 } 00:25:32.433 }, 00:25:32.433 "base_bdevs_list": [ 00:25:32.433 { 00:25:32.433 "name": "spare", 00:25:32.433 "uuid": "bb26fb63-7019-5846-a880-dce0a0b65c44", 00:25:32.433 "is_configured": true, 00:25:32.433 "data_offset": 256, 00:25:32.433 "data_size": 7936 00:25:32.433 }, 00:25:32.433 { 00:25:32.433 "name": "BaseBdev2", 00:25:32.433 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:32.433 "is_configured": true, 00:25:32.433 "data_offset": 256, 00:25:32.433 "data_size": 7936 00:25:32.433 } 00:25:32.433 ] 00:25:32.433 }' 00:25:32.433 10:40:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:32.433 10:40:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:32.433 10:40:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:32.433 10:40:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:32.433 10:40:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:32.433 [2024-07-25 10:40:36.130032] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:32.433 [2024-07-25 10:40:36.130099] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:32.433 [2024-07-25 10:40:36.130213] 
bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.361 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:33.361 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:33.361 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:33.361 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:33.361 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:33.361 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:33.361 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.361 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.618 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:33.618 "name": "raid_bdev1", 00:25:33.618 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:33.618 "strip_size_kb": 0, 00:25:33.618 "state": "online", 00:25:33.618 "raid_level": "raid1", 00:25:33.618 "superblock": true, 00:25:33.618 "num_base_bdevs": 2, 00:25:33.618 "num_base_bdevs_discovered": 2, 00:25:33.618 "num_base_bdevs_operational": 2, 00:25:33.618 "base_bdevs_list": [ 00:25:33.618 { 00:25:33.618 "name": "spare", 00:25:33.618 "uuid": "bb26fb63-7019-5846-a880-dce0a0b65c44", 00:25:33.618 "is_configured": true, 00:25:33.618 "data_offset": 256, 00:25:33.618 "data_size": 7936 00:25:33.618 }, 00:25:33.618 { 00:25:33.618 "name": "BaseBdev2", 00:25:33.618 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:33.618 "is_configured": true, 00:25:33.618 "data_offset": 256, 00:25:33.618 
"data_size": 7936 00:25:33.618 } 00:25:33.618 ] 00:25:33.618 }' 00:25:33.618 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:33.618 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:33.618 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:33.875 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:33.875 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:25:33.875 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:33.875 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:33.875 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:33.875 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:33.875 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:33.875 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.875 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:34.133 "name": "raid_bdev1", 00:25:34.133 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:34.133 "strip_size_kb": 0, 00:25:34.133 "state": "online", 00:25:34.133 "raid_level": "raid1", 00:25:34.133 "superblock": true, 00:25:34.133 "num_base_bdevs": 2, 00:25:34.133 "num_base_bdevs_discovered": 2, 00:25:34.133 "num_base_bdevs_operational": 2, 00:25:34.133 
"base_bdevs_list": [ 00:25:34.133 { 00:25:34.133 "name": "spare", 00:25:34.133 "uuid": "bb26fb63-7019-5846-a880-dce0a0b65c44", 00:25:34.133 "is_configured": true, 00:25:34.133 "data_offset": 256, 00:25:34.133 "data_size": 7936 00:25:34.133 }, 00:25:34.133 { 00:25:34.133 "name": "BaseBdev2", 00:25:34.133 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:34.133 "is_configured": true, 00:25:34.133 "data_offset": 256, 00:25:34.133 "data_size": 7936 00:25:34.133 } 00:25:34.133 ] 00:25:34.133 }' 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
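The `[: =: unary operator expected` failure logged above (bdev_raid.sh line 665, traced as `'[' = false ']'`) is the classic single-bracket pitfall: an unquoted variable that expands to nothing leaves `[` with only `= false`, which is not a valid expression. A minimal reproduction with the usual fixes (the variable name is illustrative, not the SPDK code):

```shell
var=""

# Broken form: with var empty and unquoted, [ receives only "= false",
# which is neither a valid unary nor binary test (error status > 1).
broken_status=0
[ $var = false ] 2>/dev/null || broken_status=$?

# Fix 1: quoting keeps the empty expansion as a single (empty) operand,
# so the test is well-formed and simply evaluates to false (status 1).
quoted_status=0
[ "$var" = false ] || quoted_status=$?

# Fix 2: [[ ]] does not word-split unquoted expansions, so it is safe too.
dbracket_status=0
[[ $var = false ]] || dbracket_status=$?
```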
00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.133 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.391 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:34.391 "name": "raid_bdev1", 00:25:34.391 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:34.391 "strip_size_kb": 0, 00:25:34.391 "state": "online", 00:25:34.391 "raid_level": "raid1", 00:25:34.391 "superblock": true, 00:25:34.391 "num_base_bdevs": 2, 00:25:34.391 "num_base_bdevs_discovered": 2, 00:25:34.391 "num_base_bdevs_operational": 2, 00:25:34.391 "base_bdevs_list": [ 00:25:34.391 { 00:25:34.391 "name": "spare", 00:25:34.391 "uuid": "bb26fb63-7019-5846-a880-dce0a0b65c44", 00:25:34.391 "is_configured": true, 00:25:34.391 "data_offset": 256, 00:25:34.391 "data_size": 7936 00:25:34.391 }, 00:25:34.391 { 00:25:34.391 "name": "BaseBdev2", 00:25:34.391 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:34.391 "is_configured": true, 00:25:34.391 "data_offset": 256, 00:25:34.391 "data_size": 7936 00:25:34.391 } 00:25:34.391 ] 00:25:34.391 }' 00:25:34.391 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:34.391 10:40:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:34.956 10:40:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:35.214 [2024-07-25 10:40:38.863723] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:35.214 [2024-07-25 10:40:38.863753] bdev_raid.c:1870:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:25:35.214 [2024-07-25 10:40:38.863842] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:35.214 [2024-07-25 10:40:38.863919] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:35.214 [2024-07-25 10:40:38.863935] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb91010 name raid_bdev1, state offline 00:25:35.214 10:40:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.214 10:40:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:25:35.472 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:25:35.472 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:25:35.472 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:25:35.472 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:35.472 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:35.472 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:35.472 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:35.472 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:35.472 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:35.472 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:25:35.472 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 
0 )) 00:25:35.472 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:35.472 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:35.730 /dev/nbd0 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:35.730 1+0 records in 00:25:35.730 1+0 records out 00:25:35.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252903 s, 16.2 MB/s 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@886 -- # size=4096 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:35.730 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:35.987 /dev/nbd1 00:25:35.987 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:35.987 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:35.988 1+0 records in 00:25:35.988 1+0 records out 00:25:35.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026563 s, 15.4 MB/s 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:35.988 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:36.245 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:36.245 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:36.245 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:36.245 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:36.245 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:25:36.245 10:40:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:36.245 10:40:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:36.502 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:36.502 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:36.502 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:36.502 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:36.502 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:36.502 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:36.502 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:25:36.502 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:25:36.502 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:36.502 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:36.759 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:36.759 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:36.759 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:36.759 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:36.759 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:36.759 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:36.759 10:40:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:25:36.759 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:25:36.759 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:25:36.759 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:37.016 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:37.273 [2024-07-25 10:40:40.877549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:37.273 [2024-07-25 10:40:40.877614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.273 [2024-07-25 10:40:40.877643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f71d0 00:25:37.273 [2024-07-25 10:40:40.877659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.273 [2024-07-25 10:40:40.879434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.273 [2024-07-25 10:40:40.879463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:37.273 [2024-07-25 10:40:40.879568] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:37.273 [2024-07-25 10:40:40.879607] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:37.273 [2024-07-25 10:40:40.879737] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:37.273 spare 00:25:37.273 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:37.273 10:40:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:37.273 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:37.273 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:37.273 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:37.273 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:37.273 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:37.273 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:37.273 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:37.273 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:37.273 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.273 10:40:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.273 [2024-07-25 10:40:40.980072] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x9ef610 00:25:37.273 [2024-07-25 10:40:40.980092] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:37.273 [2024-07-25 10:40:40.980281] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb8ff70 00:25:37.273 [2024-07-25 10:40:40.980453] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9ef610 00:25:37.273 [2024-07-25 10:40:40.980469] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9ef610 00:25:37.273 [2024-07-25 10:40:40.980582] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.531 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:37.531 "name": "raid_bdev1", 00:25:37.531 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:37.531 "strip_size_kb": 0, 00:25:37.531 "state": "online", 00:25:37.531 "raid_level": "raid1", 00:25:37.531 "superblock": true, 00:25:37.531 "num_base_bdevs": 2, 00:25:37.531 "num_base_bdevs_discovered": 2, 00:25:37.531 "num_base_bdevs_operational": 2, 00:25:37.531 "base_bdevs_list": [ 00:25:37.531 { 00:25:37.531 "name": "spare", 00:25:37.531 "uuid": "bb26fb63-7019-5846-a880-dce0a0b65c44", 00:25:37.531 "is_configured": true, 00:25:37.531 "data_offset": 256, 00:25:37.531 "data_size": 7936 00:25:37.531 }, 00:25:37.531 { 00:25:37.531 "name": "BaseBdev2", 00:25:37.531 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:37.531 "is_configured": true, 00:25:37.531 "data_offset": 256, 00:25:37.531 "data_size": 7936 00:25:37.531 } 00:25:37.531 ] 00:25:37.531 }' 00:25:37.531 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:37.531 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:38.095 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:38.095 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:38.095 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:38.095 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:38.095 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:38.095 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:25:38.095 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.352 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:38.352 "name": "raid_bdev1", 00:25:38.352 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:38.352 "strip_size_kb": 0, 00:25:38.352 "state": "online", 00:25:38.352 "raid_level": "raid1", 00:25:38.352 "superblock": true, 00:25:38.352 "num_base_bdevs": 2, 00:25:38.352 "num_base_bdevs_discovered": 2, 00:25:38.352 "num_base_bdevs_operational": 2, 00:25:38.352 "base_bdevs_list": [ 00:25:38.352 { 00:25:38.352 "name": "spare", 00:25:38.353 "uuid": "bb26fb63-7019-5846-a880-dce0a0b65c44", 00:25:38.353 "is_configured": true, 00:25:38.353 "data_offset": 256, 00:25:38.353 "data_size": 7936 00:25:38.353 }, 00:25:38.353 { 00:25:38.353 "name": "BaseBdev2", 00:25:38.353 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:38.353 "is_configured": true, 00:25:38.353 "data_offset": 256, 00:25:38.353 "data_size": 7936 00:25:38.353 } 00:25:38.353 ] 00:25:38.353 }' 00:25:38.353 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:38.353 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:38.353 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:38.353 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:38.353 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.353 10:40:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:38.610 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 
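The `waitfornbd` helper traced earlier polls `/proc/partitions` with `grep -q -w` until the nbd device appears, then confirms I/O with a single `O_DIRECT` `dd` read. The bounded-retry part of that pattern can be sketched generically as follows (function name, bounds, and file are illustrative, not the SPDK helper itself):

```shell
# Poll a file until a whole word appears in it, with bounded retries.
# Mirrors the retry loop seen in the waitfornbd trace: grep -q -w in a
# counted loop, short sleep between attempts, nonzero status on timeout.
wait_for_word() {
	local word=$1 file=$2 i
	for ((i = 1; i <= 20; i++)); do
		grep -q -w "$word" "$file" && return 0
		sleep 0.1
	done
	return 1
}
```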
00:25:38.610 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:38.868 [2024-07-25 10:40:42.505967] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:38.868 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:38.868 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:38.868 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:38.868 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:38.868 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:38.868 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:38.868 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:38.868 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:38.868 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:38.868 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:38.868 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.868 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.125 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:39.125 "name": "raid_bdev1", 00:25:39.125 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:39.125 
"strip_size_kb": 0, 00:25:39.125 "state": "online", 00:25:39.125 "raid_level": "raid1", 00:25:39.125 "superblock": true, 00:25:39.125 "num_base_bdevs": 2, 00:25:39.125 "num_base_bdevs_discovered": 1, 00:25:39.125 "num_base_bdevs_operational": 1, 00:25:39.125 "base_bdevs_list": [ 00:25:39.125 { 00:25:39.125 "name": null, 00:25:39.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.126 "is_configured": false, 00:25:39.126 "data_offset": 256, 00:25:39.126 "data_size": 7936 00:25:39.126 }, 00:25:39.126 { 00:25:39.126 "name": "BaseBdev2", 00:25:39.126 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:39.126 "is_configured": true, 00:25:39.126 "data_offset": 256, 00:25:39.126 "data_size": 7936 00:25:39.126 } 00:25:39.126 ] 00:25:39.126 }' 00:25:39.126 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:39.126 10:40:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:39.690 10:40:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:39.948 [2024-07-25 10:40:43.568846] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:39.948 [2024-07-25 10:40:43.569053] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:39.948 [2024-07-25 10:40:43.569075] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:39.948 [2024-07-25 10:40:43.569116] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:39.948 [2024-07-25 10:40:43.575816] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb8ff70 00:25:39.948 [2024-07-25 10:40:43.578016] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:39.948 10:40:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:25:41.320 10:40:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:41.320 10:40:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:41.320 10:40:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:41.320 10:40:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:41.320 10:40:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:41.320 10:40:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.320 10:40:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.320 10:40:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:41.320 "name": "raid_bdev1", 00:25:41.320 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:41.320 "strip_size_kb": 0, 00:25:41.320 "state": "online", 00:25:41.320 "raid_level": "raid1", 00:25:41.320 "superblock": true, 00:25:41.320 "num_base_bdevs": 2, 00:25:41.320 "num_base_bdevs_discovered": 2, 00:25:41.320 "num_base_bdevs_operational": 2, 00:25:41.320 "process": { 00:25:41.320 "type": "rebuild", 00:25:41.320 "target": "spare", 00:25:41.320 "progress": { 00:25:41.320 "blocks": 3072, 
00:25:41.320 "percent": 38 00:25:41.320 } 00:25:41.320 }, 00:25:41.320 "base_bdevs_list": [ 00:25:41.320 { 00:25:41.320 "name": "spare", 00:25:41.320 "uuid": "bb26fb63-7019-5846-a880-dce0a0b65c44", 00:25:41.320 "is_configured": true, 00:25:41.320 "data_offset": 256, 00:25:41.320 "data_size": 7936 00:25:41.320 }, 00:25:41.320 { 00:25:41.320 "name": "BaseBdev2", 00:25:41.320 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:41.320 "is_configured": true, 00:25:41.320 "data_offset": 256, 00:25:41.320 "data_size": 7936 00:25:41.320 } 00:25:41.320 ] 00:25:41.320 }' 00:25:41.320 10:40:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:41.320 10:40:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:41.320 10:40:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:41.320 10:40:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:41.320 10:40:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:41.578 [2024-07-25 10:40:45.156367] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:41.578 [2024-07-25 10:40:45.191403] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:41.578 [2024-07-25 10:40:45.191459] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.578 [2024-07-25 10:40:45.191480] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:41.578 [2024-07-25 10:40:45.191491] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:41.578 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:25:41.578 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:41.578 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:41.578 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:41.578 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:41.578 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:41.578 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:41.578 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:41.578 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:41.578 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:41.578 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.578 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.836 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:41.836 "name": "raid_bdev1", 00:25:41.836 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:41.836 "strip_size_kb": 0, 00:25:41.836 "state": "online", 00:25:41.836 "raid_level": "raid1", 00:25:41.836 "superblock": true, 00:25:41.836 "num_base_bdevs": 2, 00:25:41.836 "num_base_bdevs_discovered": 1, 00:25:41.836 "num_base_bdevs_operational": 1, 00:25:41.836 "base_bdevs_list": [ 00:25:41.836 { 00:25:41.836 "name": null, 00:25:41.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.836 "is_configured": false, 00:25:41.836 "data_offset": 
256, 00:25:41.836 "data_size": 7936 00:25:41.836 }, 00:25:41.836 { 00:25:41.836 "name": "BaseBdev2", 00:25:41.836 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:41.836 "is_configured": true, 00:25:41.836 "data_offset": 256, 00:25:41.836 "data_size": 7936 00:25:41.836 } 00:25:41.836 ] 00:25:41.836 }' 00:25:41.836 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:41.836 10:40:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:42.401 10:40:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:42.659 [2024-07-25 10:40:46.319831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:42.659 [2024-07-25 10:40:46.319901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:42.659 [2024-07-25 10:40:46.319931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f0460 00:25:42.659 [2024-07-25 10:40:46.319947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:42.659 [2024-07-25 10:40:46.320438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:42.659 [2024-07-25 10:40:46.320465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:42.659 [2024-07-25 10:40:46.320558] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:42.659 [2024-07-25 10:40:46.320578] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:42.659 [2024-07-25 10:40:46.320589] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:42.659 [2024-07-25 10:40:46.320622] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:42.659 [2024-07-25 10:40:46.326892] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb913a0 00:25:42.659 spare 00:25:42.659 [2024-07-25 10:40:46.328454] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:42.659 10:40:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:25:44.031 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:44.031 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:44.031 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:44.031 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:44.031 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:44.031 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.031 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.031 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:44.031 "name": "raid_bdev1", 00:25:44.031 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:44.031 "strip_size_kb": 0, 00:25:44.031 "state": "online", 00:25:44.031 "raid_level": "raid1", 00:25:44.031 "superblock": true, 00:25:44.031 "num_base_bdevs": 2, 00:25:44.031 "num_base_bdevs_discovered": 2, 00:25:44.031 "num_base_bdevs_operational": 2, 00:25:44.031 "process": { 00:25:44.031 "type": "rebuild", 00:25:44.031 "target": "spare", 00:25:44.031 "progress": { 00:25:44.031 
"blocks": 3072, 00:25:44.031 "percent": 38 00:25:44.031 } 00:25:44.031 }, 00:25:44.031 "base_bdevs_list": [ 00:25:44.031 { 00:25:44.031 "name": "spare", 00:25:44.031 "uuid": "bb26fb63-7019-5846-a880-dce0a0b65c44", 00:25:44.031 "is_configured": true, 00:25:44.031 "data_offset": 256, 00:25:44.031 "data_size": 7936 00:25:44.031 }, 00:25:44.031 { 00:25:44.031 "name": "BaseBdev2", 00:25:44.031 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:44.031 "is_configured": true, 00:25:44.031 "data_offset": 256, 00:25:44.031 "data_size": 7936 00:25:44.031 } 00:25:44.031 ] 00:25:44.031 }' 00:25:44.031 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:44.031 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:44.031 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:44.031 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:44.031 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:44.289 [2024-07-25 10:40:47.903512] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:44.289 [2024-07-25 10:40:47.942193] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:44.289 [2024-07-25 10:40:47.942248] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:44.289 [2024-07-25 10:40:47.942268] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:44.289 [2024-07-25 10:40:47.942279] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:44.289 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:25:44.289 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:44.289 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:44.289 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:44.289 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:44.289 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:44.289 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:44.289 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:44.289 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:44.289 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:44.289 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.289 10:40:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.548 10:40:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:44.548 "name": "raid_bdev1", 00:25:44.548 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:44.548 "strip_size_kb": 0, 00:25:44.548 "state": "online", 00:25:44.548 "raid_level": "raid1", 00:25:44.548 "superblock": true, 00:25:44.548 "num_base_bdevs": 2, 00:25:44.548 "num_base_bdevs_discovered": 1, 00:25:44.548 "num_base_bdevs_operational": 1, 00:25:44.548 "base_bdevs_list": [ 00:25:44.548 { 00:25:44.548 "name": null, 00:25:44.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.548 "is_configured": false, 00:25:44.548 
"data_offset": 256, 00:25:44.548 "data_size": 7936 00:25:44.548 }, 00:25:44.548 { 00:25:44.548 "name": "BaseBdev2", 00:25:44.548 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:44.548 "is_configured": true, 00:25:44.548 "data_offset": 256, 00:25:44.548 "data_size": 7936 00:25:44.548 } 00:25:44.548 ] 00:25:44.548 }' 00:25:44.548 10:40:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:44.548 10:40:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:45.113 10:40:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:45.113 10:40:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:45.113 10:40:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:45.113 10:40:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:45.113 10:40:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:45.113 10:40:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.113 10:40:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.370 10:40:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:45.370 "name": "raid_bdev1", 00:25:45.370 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:45.370 "strip_size_kb": 0, 00:25:45.370 "state": "online", 00:25:45.370 "raid_level": "raid1", 00:25:45.370 "superblock": true, 00:25:45.370 "num_base_bdevs": 2, 00:25:45.370 "num_base_bdevs_discovered": 1, 00:25:45.370 "num_base_bdevs_operational": 1, 00:25:45.370 "base_bdevs_list": [ 00:25:45.370 { 00:25:45.370 "name": null, 00:25:45.370 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:45.370 "is_configured": false, 00:25:45.370 "data_offset": 256, 00:25:45.370 "data_size": 7936 00:25:45.370 }, 00:25:45.370 { 00:25:45.370 "name": "BaseBdev2", 00:25:45.370 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:45.370 "is_configured": true, 00:25:45.370 "data_offset": 256, 00:25:45.370 "data_size": 7936 00:25:45.370 } 00:25:45.370 ] 00:25:45.370 }' 00:25:45.370 10:40:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:45.370 10:40:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:45.370 10:40:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:45.628 10:40:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:45.628 10:40:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:45.886 10:40:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:46.143 [2024-07-25 10:40:49.640603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:46.143 [2024-07-25 10:40:49.640672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.143 [2024-07-25 10:40:49.640702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f6660 00:25:46.143 [2024-07-25 10:40:49.640726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.143 [2024-07-25 10:40:49.641184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.143 [2024-07-25 10:40:49.641210] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:25:46.143 [2024-07-25 10:40:49.641303] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:46.143 [2024-07-25 10:40:49.641323] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:46.143 [2024-07-25 10:40:49.641332] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:46.143 BaseBdev1 00:25:46.143 10:40:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:25:47.077 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:47.077 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:47.077 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:47.077 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:47.077 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:47.077 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:47.077 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:47.077 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:47.077 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:47.077 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:47.077 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.077 10:40:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.335 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:47.335 "name": "raid_bdev1", 00:25:47.335 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:47.335 "strip_size_kb": 0, 00:25:47.335 "state": "online", 00:25:47.335 "raid_level": "raid1", 00:25:47.335 "superblock": true, 00:25:47.335 "num_base_bdevs": 2, 00:25:47.335 "num_base_bdevs_discovered": 1, 00:25:47.335 "num_base_bdevs_operational": 1, 00:25:47.335 "base_bdevs_list": [ 00:25:47.335 { 00:25:47.335 "name": null, 00:25:47.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.335 "is_configured": false, 00:25:47.335 "data_offset": 256, 00:25:47.335 "data_size": 7936 00:25:47.335 }, 00:25:47.335 { 00:25:47.335 "name": "BaseBdev2", 00:25:47.335 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:47.335 "is_configured": true, 00:25:47.335 "data_offset": 256, 00:25:47.335 "data_size": 7936 00:25:47.335 } 00:25:47.335 ] 00:25:47.335 }' 00:25:47.335 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:47.335 10:40:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:47.933 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:47.933 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:47.933 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:47.933 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:47.933 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:47.933 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.933 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:48.191 "name": "raid_bdev1", 00:25:48.191 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:48.191 "strip_size_kb": 0, 00:25:48.191 "state": "online", 00:25:48.191 "raid_level": "raid1", 00:25:48.191 "superblock": true, 00:25:48.191 "num_base_bdevs": 2, 00:25:48.191 "num_base_bdevs_discovered": 1, 00:25:48.191 "num_base_bdevs_operational": 1, 00:25:48.191 "base_bdevs_list": [ 00:25:48.191 { 00:25:48.191 "name": null, 00:25:48.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.191 "is_configured": false, 00:25:48.191 "data_offset": 256, 00:25:48.191 "data_size": 7936 00:25:48.191 }, 00:25:48.191 { 00:25:48.191 "name": "BaseBdev2", 00:25:48.191 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:48.191 "is_configured": true, 00:25:48.191 "data_offset": 256, 00:25:48.191 "data_size": 7936 00:25:48.191 } 00:25:48.191 ] 00:25:48.191 }' 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:48.191 10:40:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:48.448 [2024-07-25 10:40:52.038982] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:48.448 [2024-07-25 10:40:52.039205] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:48.448 [2024-07-25 10:40:52.039226] bdev_raid.c:3673:raid_bdev_examine_sb: 
*DEBUG*: raid superblock does not contain this bdev's uuid 00:25:48.448 request: 00:25:48.448 { 00:25:48.448 "base_bdev": "BaseBdev1", 00:25:48.448 "raid_bdev": "raid_bdev1", 00:25:48.448 "method": "bdev_raid_add_base_bdev", 00:25:48.448 "req_id": 1 00:25:48.448 } 00:25:48.449 Got JSON-RPC error response 00:25:48.449 response: 00:25:48.449 { 00:25:48.449 "code": -22, 00:25:48.449 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:48.449 } 00:25:48.449 10:40:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:25:48.449 10:40:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:48.449 10:40:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:48.449 10:40:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:48.449 10:40:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:25:49.379 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:49.379 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:49.379 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:49.379 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:49.379 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:49.379 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:49.379 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:49.379 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:49.379 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:25:49.379 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:49.379 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.379 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.636 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:49.636 "name": "raid_bdev1", 00:25:49.636 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:49.636 "strip_size_kb": 0, 00:25:49.636 "state": "online", 00:25:49.636 "raid_level": "raid1", 00:25:49.636 "superblock": true, 00:25:49.636 "num_base_bdevs": 2, 00:25:49.636 "num_base_bdevs_discovered": 1, 00:25:49.636 "num_base_bdevs_operational": 1, 00:25:49.636 "base_bdevs_list": [ 00:25:49.636 { 00:25:49.636 "name": null, 00:25:49.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.636 "is_configured": false, 00:25:49.636 "data_offset": 256, 00:25:49.636 "data_size": 7936 00:25:49.636 }, 00:25:49.636 { 00:25:49.636 "name": "BaseBdev2", 00:25:49.636 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:49.636 "is_configured": true, 00:25:49.637 "data_offset": 256, 00:25:49.637 "data_size": 7936 00:25:49.637 } 00:25:49.637 ] 00:25:49.637 }' 00:25:49.637 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:49.637 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:50.199 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:50.199 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:50.199 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:50.199 
10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:50.199 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:50.199 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.199 10:40:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.456 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:50.456 "name": "raid_bdev1", 00:25:50.456 "uuid": "ac010cae-10c2-41ac-81d5-022c3c99d30a", 00:25:50.456 "strip_size_kb": 0, 00:25:50.456 "state": "online", 00:25:50.456 "raid_level": "raid1", 00:25:50.456 "superblock": true, 00:25:50.456 "num_base_bdevs": 2, 00:25:50.456 "num_base_bdevs_discovered": 1, 00:25:50.456 "num_base_bdevs_operational": 1, 00:25:50.456 "base_bdevs_list": [ 00:25:50.456 { 00:25:50.456 "name": null, 00:25:50.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.456 "is_configured": false, 00:25:50.456 "data_offset": 256, 00:25:50.456 "data_size": 7936 00:25:50.456 }, 00:25:50.456 { 00:25:50.456 "name": "BaseBdev2", 00:25:50.456 "uuid": "101c1d8f-fdc4-5ae7-b2d2-374cfe47386a", 00:25:50.456 "is_configured": true, 00:25:50.456 "data_offset": 256, 00:25:50.456 "data_size": 7936 00:25:50.456 } 00:25:50.456 ] 00:25:50.456 }' 00:25:50.456 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:50.456 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:50.456 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:50.714 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:50.714 10:40:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 2463893 00:25:50.714 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 2463893 ']' 00:25:50.714 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 2463893 00:25:50.714 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:25:50.714 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:50.714 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2463893 00:25:50.714 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:50.714 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:50.714 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2463893' 00:25:50.714 killing process with pid 2463893 00:25:50.714 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 2463893 00:25:50.714 Received shutdown signal, test time was about 60.000000 seconds 00:25:50.714 00:25:50.714 Latency(us) 00:25:50.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.714 =================================================================================================================== 00:25:50.714 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:50.714 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 2463893 00:25:50.714 [2024-07-25 10:40:54.208784] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:50.714 [2024-07-25 10:40:54.208907] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:50.714 [2024-07-25 10:40:54.208966] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:50.714 [2024-07-25 10:40:54.208981] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9ef610 name raid_bdev1, state offline 00:25:50.714 [2024-07-25 10:40:54.246341] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:50.972 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:25:50.972 00:25:50.972 real 0m31.464s 00:25:50.972 user 0m49.495s 00:25:50.972 sys 0m4.163s 00:25:50.972 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:50.972 10:40:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:50.972 ************************************ 00:25:50.972 END TEST raid_rebuild_test_sb_4k 00:25:50.972 ************************************ 00:25:50.972 10:40:54 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:25:50.972 10:40:54 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:25:50.972 10:40:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:50.972 10:40:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:50.972 10:40:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:50.972 ************************************ 00:25:50.972 START TEST raid_state_function_test_sb_md_separate 00:25:50.972 ************************************ 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # 
'[' raid1 '!=' raid1 ']' 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=2467975 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2467975' 00:25:50.972 Process raid pid: 2467975 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 2467975 /var/tmp/spdk-raid.sock 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 2467975 ']' 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:50.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:50.972 10:40:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:50.972 [2024-07-25 10:40:54.632313] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:25:50.972 [2024-07-25 10:40:54.632401] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.230 [2024-07-25 10:40:54.708125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.230 [2024-07-25 10:40:54.818934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.230 [2024-07-25 10:40:54.895136] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:51.230 [2024-07-25 10:40:54.895171] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:52.161 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:52.161 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:25:52.161 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:52.418 [2024-07-25 10:40:55.898368] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:52.418 [2024-07-25 10:40:55.898419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:52.418 [2024-07-25 10:40:55.898432] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:52.418 
[2024-07-25 10:40:55.898446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:52.418 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:52.418 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:52.418 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:52.418 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:52.418 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:52.418 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:52.418 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:52.419 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:52.419 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:52.419 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:52.419 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.419 10:40:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:52.676 10:40:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:52.676 "name": "Existed_Raid", 00:25:52.676 "uuid": 
"223f90d3-0f59-44c8-a88c-86f3257a6a5d", 00:25:52.676 "strip_size_kb": 0, 00:25:52.676 "state": "configuring", 00:25:52.676 "raid_level": "raid1", 00:25:52.676 "superblock": true, 00:25:52.676 "num_base_bdevs": 2, 00:25:52.676 "num_base_bdevs_discovered": 0, 00:25:52.676 "num_base_bdevs_operational": 2, 00:25:52.676 "base_bdevs_list": [ 00:25:52.676 { 00:25:52.676 "name": "BaseBdev1", 00:25:52.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.676 "is_configured": false, 00:25:52.676 "data_offset": 0, 00:25:52.676 "data_size": 0 00:25:52.676 }, 00:25:52.676 { 00:25:52.676 "name": "BaseBdev2", 00:25:52.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.676 "is_configured": false, 00:25:52.676 "data_offset": 0, 00:25:52.676 "data_size": 0 00:25:52.676 } 00:25:52.676 ] 00:25:52.676 }' 00:25:52.676 10:40:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:52.676 10:40:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:53.241 10:40:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:53.498 [2024-07-25 10:40:57.021213] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:53.498 [2024-07-25 10:40:57.021248] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19a7600 name Existed_Raid, state configuring 00:25:53.498 10:40:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:53.756 [2024-07-25 10:40:57.310036] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:53.756 [2024-07-25 10:40:57.310085] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:53.756 [2024-07-25 10:40:57.310113] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:53.756 [2024-07-25 10:40:57.310131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:53.756 10:40:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:25:54.014 [2024-07-25 10:40:57.612302] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:54.014 BaseBdev1 00:25:54.014 10:40:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:54.014 10:40:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:54.014 10:40:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:54.014 10:40:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:25:54.014 10:40:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:54.014 10:40:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:54.014 10:40:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:54.271 10:40:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:54.528 [ 00:25:54.528 { 00:25:54.528 "name": 
"BaseBdev1", 00:25:54.528 "aliases": [ 00:25:54.528 "345a5f6b-99ce-4309-b469-77e139cad75c" 00:25:54.528 ], 00:25:54.528 "product_name": "Malloc disk", 00:25:54.528 "block_size": 4096, 00:25:54.528 "num_blocks": 8192, 00:25:54.528 "uuid": "345a5f6b-99ce-4309-b469-77e139cad75c", 00:25:54.528 "md_size": 32, 00:25:54.528 "md_interleave": false, 00:25:54.528 "dif_type": 0, 00:25:54.528 "assigned_rate_limits": { 00:25:54.528 "rw_ios_per_sec": 0, 00:25:54.528 "rw_mbytes_per_sec": 0, 00:25:54.528 "r_mbytes_per_sec": 0, 00:25:54.528 "w_mbytes_per_sec": 0 00:25:54.528 }, 00:25:54.528 "claimed": true, 00:25:54.528 "claim_type": "exclusive_write", 00:25:54.528 "zoned": false, 00:25:54.528 "supported_io_types": { 00:25:54.528 "read": true, 00:25:54.528 "write": true, 00:25:54.528 "unmap": true, 00:25:54.528 "flush": true, 00:25:54.528 "reset": true, 00:25:54.528 "nvme_admin": false, 00:25:54.528 "nvme_io": false, 00:25:54.528 "nvme_io_md": false, 00:25:54.528 "write_zeroes": true, 00:25:54.528 "zcopy": true, 00:25:54.528 "get_zone_info": false, 00:25:54.528 "zone_management": false, 00:25:54.528 "zone_append": false, 00:25:54.528 "compare": false, 00:25:54.528 "compare_and_write": false, 00:25:54.528 "abort": true, 00:25:54.528 "seek_hole": false, 00:25:54.528 "seek_data": false, 00:25:54.528 "copy": true, 00:25:54.528 "nvme_iov_md": false 00:25:54.528 }, 00:25:54.528 "memory_domains": [ 00:25:54.528 { 00:25:54.528 "dma_device_id": "system", 00:25:54.528 "dma_device_type": 1 00:25:54.528 }, 00:25:54.528 { 00:25:54.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.528 "dma_device_type": 2 00:25:54.528 } 00:25:54.528 ], 00:25:54.528 "driver_specific": {} 00:25:54.528 } 00:25:54.528 ] 00:25:54.528 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:25:54.528 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:54.529 
10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:54.529 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:54.529 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:54.529 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:54.529 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:54.529 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:54.529 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:54.529 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:54.529 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:54.529 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.529 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:54.786 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:54.786 "name": "Existed_Raid", 00:25:54.786 "uuid": "6a316a67-520c-483c-9377-bdd20d0253c0", 00:25:54.786 "strip_size_kb": 0, 00:25:54.786 "state": "configuring", 00:25:54.786 "raid_level": "raid1", 00:25:54.786 "superblock": true, 00:25:54.786 "num_base_bdevs": 2, 00:25:54.786 "num_base_bdevs_discovered": 1, 00:25:54.786 "num_base_bdevs_operational": 2, 00:25:54.786 
"base_bdevs_list": [ 00:25:54.786 { 00:25:54.786 "name": "BaseBdev1", 00:25:54.786 "uuid": "345a5f6b-99ce-4309-b469-77e139cad75c", 00:25:54.786 "is_configured": true, 00:25:54.786 "data_offset": 256, 00:25:54.786 "data_size": 7936 00:25:54.786 }, 00:25:54.786 { 00:25:54.786 "name": "BaseBdev2", 00:25:54.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.786 "is_configured": false, 00:25:54.786 "data_offset": 0, 00:25:54.786 "data_size": 0 00:25:54.786 } 00:25:54.786 ] 00:25:54.786 }' 00:25:54.786 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:54.786 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:55.351 10:40:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:55.609 [2024-07-25 10:40:59.144506] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:55.609 [2024-07-25 10:40:59.144569] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19a6e50 name Existed_Raid, state configuring 00:25:55.609 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:55.866 [2024-07-25 10:40:59.401226] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:55.866 [2024-07-25 10:40:59.402791] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:55.866 [2024-07-25 10:40:59.402829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:55.866 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:55.866 
10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:55.866 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:55.866 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:55.866 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:55.866 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:55.866 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:55.866 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:55.866 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:55.866 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:55.866 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:55.866 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:55.866 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.866 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.124 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:56.124 "name": "Existed_Raid", 00:25:56.124 "uuid": 
"53cf1cbd-b235-4c41-961c-c44c5964499c", 00:25:56.124 "strip_size_kb": 0, 00:25:56.124 "state": "configuring", 00:25:56.124 "raid_level": "raid1", 00:25:56.124 "superblock": true, 00:25:56.124 "num_base_bdevs": 2, 00:25:56.124 "num_base_bdevs_discovered": 1, 00:25:56.124 "num_base_bdevs_operational": 2, 00:25:56.124 "base_bdevs_list": [ 00:25:56.124 { 00:25:56.124 "name": "BaseBdev1", 00:25:56.124 "uuid": "345a5f6b-99ce-4309-b469-77e139cad75c", 00:25:56.124 "is_configured": true, 00:25:56.124 "data_offset": 256, 00:25:56.124 "data_size": 7936 00:25:56.124 }, 00:25:56.124 { 00:25:56.124 "name": "BaseBdev2", 00:25:56.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.124 "is_configured": false, 00:25:56.124 "data_offset": 0, 00:25:56.124 "data_size": 0 00:25:56.124 } 00:25:56.124 ] 00:25:56.124 }' 00:25:56.124 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:56.124 10:40:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:56.690 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:25:56.948 [2024-07-25 10:41:00.442758] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:56.948 [2024-07-25 10:41:00.442948] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x19a8d00 00:25:56.948 [2024-07-25 10:41:00.442967] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:56.948 [2024-07-25 10:41:00.443032] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19a73f0 00:25:56.948 [2024-07-25 10:41:00.443173] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19a8d00 00:25:56.948 [2024-07-25 10:41:00.443191] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name Existed_Raid, raid_bdev 0x19a8d00 00:25:56.948 [2024-07-25 10:41:00.443276] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:56.948 BaseBdev2 00:25:56.948 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:56.948 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:56.948 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:56.948 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:25:56.948 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:56.948 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:56.948 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:57.205 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:57.462 [ 00:25:57.462 { 00:25:57.462 "name": "BaseBdev2", 00:25:57.462 "aliases": [ 00:25:57.462 "282775e6-01fe-4c27-8cfd-4e2cd660163c" 00:25:57.462 ], 00:25:57.462 "product_name": "Malloc disk", 00:25:57.462 "block_size": 4096, 00:25:57.462 "num_blocks": 8192, 00:25:57.462 "uuid": "282775e6-01fe-4c27-8cfd-4e2cd660163c", 00:25:57.462 "md_size": 32, 00:25:57.462 "md_interleave": false, 00:25:57.462 "dif_type": 0, 00:25:57.462 "assigned_rate_limits": { 00:25:57.462 "rw_ios_per_sec": 0, 00:25:57.462 "rw_mbytes_per_sec": 0, 00:25:57.462 "r_mbytes_per_sec": 0, 00:25:57.462 
"w_mbytes_per_sec": 0 00:25:57.462 }, 00:25:57.462 "claimed": true, 00:25:57.462 "claim_type": "exclusive_write", 00:25:57.462 "zoned": false, 00:25:57.462 "supported_io_types": { 00:25:57.462 "read": true, 00:25:57.462 "write": true, 00:25:57.462 "unmap": true, 00:25:57.462 "flush": true, 00:25:57.462 "reset": true, 00:25:57.462 "nvme_admin": false, 00:25:57.462 "nvme_io": false, 00:25:57.462 "nvme_io_md": false, 00:25:57.462 "write_zeroes": true, 00:25:57.462 "zcopy": true, 00:25:57.462 "get_zone_info": false, 00:25:57.462 "zone_management": false, 00:25:57.462 "zone_append": false, 00:25:57.462 "compare": false, 00:25:57.462 "compare_and_write": false, 00:25:57.462 "abort": true, 00:25:57.462 "seek_hole": false, 00:25:57.462 "seek_data": false, 00:25:57.462 "copy": true, 00:25:57.462 "nvme_iov_md": false 00:25:57.462 }, 00:25:57.462 "memory_domains": [ 00:25:57.462 { 00:25:57.462 "dma_device_id": "system", 00:25:57.462 "dma_device_type": 1 00:25:57.462 }, 00:25:57.462 { 00:25:57.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.462 "dma_device_type": 2 00:25:57.462 } 00:25:57.462 ], 00:25:57.462 "driver_specific": {} 00:25:57.462 } 00:25:57.462 ] 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:57.462 10:41:00 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.462 10:41:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.720 10:41:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.720 "name": "Existed_Raid", 00:25:57.720 "uuid": "53cf1cbd-b235-4c41-961c-c44c5964499c", 00:25:57.720 "strip_size_kb": 0, 00:25:57.720 "state": "online", 00:25:57.720 "raid_level": "raid1", 00:25:57.720 "superblock": true, 00:25:57.720 "num_base_bdevs": 2, 00:25:57.720 "num_base_bdevs_discovered": 2, 00:25:57.720 "num_base_bdevs_operational": 2, 00:25:57.720 "base_bdevs_list": [ 00:25:57.720 { 00:25:57.720 "name": "BaseBdev1", 00:25:57.720 "uuid": "345a5f6b-99ce-4309-b469-77e139cad75c", 00:25:57.720 "is_configured": true, 00:25:57.720 "data_offset": 256, 00:25:57.720 "data_size": 7936 00:25:57.720 }, 00:25:57.720 { 00:25:57.720 "name": 
"BaseBdev2", 00:25:57.720 "uuid": "282775e6-01fe-4c27-8cfd-4e2cd660163c", 00:25:57.720 "is_configured": true, 00:25:57.720 "data_offset": 256, 00:25:57.720 "data_size": 7936 00:25:57.720 } 00:25:57.720 ] 00:25:57.720 }' 00:25:57.720 10:41:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.720 10:41:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:58.284 10:41:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:58.284 10:41:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:58.284 10:41:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:58.284 10:41:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:58.284 10:41:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:58.284 10:41:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:25:58.284 10:41:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:58.284 10:41:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:58.542 [2024-07-25 10:41:02.003295] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:58.542 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:58.542 "name": "Existed_Raid", 00:25:58.542 "aliases": [ 00:25:58.542 "53cf1cbd-b235-4c41-961c-c44c5964499c" 00:25:58.542 ], 00:25:58.542 "product_name": "Raid Volume", 00:25:58.542 "block_size": 4096, 
00:25:58.542 "num_blocks": 7936, 00:25:58.542 "uuid": "53cf1cbd-b235-4c41-961c-c44c5964499c", 00:25:58.542 "md_size": 32, 00:25:58.542 "md_interleave": false, 00:25:58.542 "dif_type": 0, 00:25:58.542 "assigned_rate_limits": { 00:25:58.542 "rw_ios_per_sec": 0, 00:25:58.542 "rw_mbytes_per_sec": 0, 00:25:58.542 "r_mbytes_per_sec": 0, 00:25:58.542 "w_mbytes_per_sec": 0 00:25:58.542 }, 00:25:58.542 "claimed": false, 00:25:58.542 "zoned": false, 00:25:58.542 "supported_io_types": { 00:25:58.542 "read": true, 00:25:58.542 "write": true, 00:25:58.542 "unmap": false, 00:25:58.542 "flush": false, 00:25:58.542 "reset": true, 00:25:58.542 "nvme_admin": false, 00:25:58.542 "nvme_io": false, 00:25:58.542 "nvme_io_md": false, 00:25:58.542 "write_zeroes": true, 00:25:58.542 "zcopy": false, 00:25:58.542 "get_zone_info": false, 00:25:58.542 "zone_management": false, 00:25:58.542 "zone_append": false, 00:25:58.542 "compare": false, 00:25:58.542 "compare_and_write": false, 00:25:58.542 "abort": false, 00:25:58.542 "seek_hole": false, 00:25:58.542 "seek_data": false, 00:25:58.542 "copy": false, 00:25:58.542 "nvme_iov_md": false 00:25:58.542 }, 00:25:58.542 "memory_domains": [ 00:25:58.542 { 00:25:58.542 "dma_device_id": "system", 00:25:58.542 "dma_device_type": 1 00:25:58.542 }, 00:25:58.542 { 00:25:58.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.542 "dma_device_type": 2 00:25:58.542 }, 00:25:58.542 { 00:25:58.542 "dma_device_id": "system", 00:25:58.542 "dma_device_type": 1 00:25:58.542 }, 00:25:58.542 { 00:25:58.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.542 "dma_device_type": 2 00:25:58.542 } 00:25:58.542 ], 00:25:58.542 "driver_specific": { 00:25:58.542 "raid": { 00:25:58.542 "uuid": "53cf1cbd-b235-4c41-961c-c44c5964499c", 00:25:58.542 "strip_size_kb": 0, 00:25:58.542 "state": "online", 00:25:58.542 "raid_level": "raid1", 00:25:58.542 "superblock": true, 00:25:58.542 "num_base_bdevs": 2, 00:25:58.542 "num_base_bdevs_discovered": 2, 00:25:58.542 
"num_base_bdevs_operational": 2, 00:25:58.542 "base_bdevs_list": [ 00:25:58.542 { 00:25:58.542 "name": "BaseBdev1", 00:25:58.542 "uuid": "345a5f6b-99ce-4309-b469-77e139cad75c", 00:25:58.542 "is_configured": true, 00:25:58.542 "data_offset": 256, 00:25:58.543 "data_size": 7936 00:25:58.543 }, 00:25:58.543 { 00:25:58.543 "name": "BaseBdev2", 00:25:58.543 "uuid": "282775e6-01fe-4c27-8cfd-4e2cd660163c", 00:25:58.543 "is_configured": true, 00:25:58.543 "data_offset": 256, 00:25:58.543 "data_size": 7936 00:25:58.543 } 00:25:58.543 ] 00:25:58.543 } 00:25:58.543 } 00:25:58.543 }' 00:25:58.543 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:58.543 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:58.543 BaseBdev2' 00:25:58.543 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:58.543 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:58.543 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:58.800 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:58.800 "name": "BaseBdev1", 00:25:58.800 "aliases": [ 00:25:58.800 "345a5f6b-99ce-4309-b469-77e139cad75c" 00:25:58.800 ], 00:25:58.800 "product_name": "Malloc disk", 00:25:58.800 "block_size": 4096, 00:25:58.800 "num_blocks": 8192, 00:25:58.800 "uuid": "345a5f6b-99ce-4309-b469-77e139cad75c", 00:25:58.800 "md_size": 32, 00:25:58.800 "md_interleave": false, 00:25:58.800 "dif_type": 0, 00:25:58.800 "assigned_rate_limits": { 00:25:58.800 "rw_ios_per_sec": 0, 00:25:58.800 
"rw_mbytes_per_sec": 0, 00:25:58.800 "r_mbytes_per_sec": 0, 00:25:58.800 "w_mbytes_per_sec": 0 00:25:58.800 }, 00:25:58.800 "claimed": true, 00:25:58.800 "claim_type": "exclusive_write", 00:25:58.800 "zoned": false, 00:25:58.800 "supported_io_types": { 00:25:58.800 "read": true, 00:25:58.800 "write": true, 00:25:58.800 "unmap": true, 00:25:58.800 "flush": true, 00:25:58.800 "reset": true, 00:25:58.800 "nvme_admin": false, 00:25:58.800 "nvme_io": false, 00:25:58.800 "nvme_io_md": false, 00:25:58.800 "write_zeroes": true, 00:25:58.800 "zcopy": true, 00:25:58.800 "get_zone_info": false, 00:25:58.800 "zone_management": false, 00:25:58.800 "zone_append": false, 00:25:58.800 "compare": false, 00:25:58.800 "compare_and_write": false, 00:25:58.800 "abort": true, 00:25:58.800 "seek_hole": false, 00:25:58.800 "seek_data": false, 00:25:58.800 "copy": true, 00:25:58.800 "nvme_iov_md": false 00:25:58.800 }, 00:25:58.800 "memory_domains": [ 00:25:58.800 { 00:25:58.800 "dma_device_id": "system", 00:25:58.800 "dma_device_type": 1 00:25:58.800 }, 00:25:58.800 { 00:25:58.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.800 "dma_device_type": 2 00:25:58.800 } 00:25:58.800 ], 00:25:58.800 "driver_specific": {} 00:25:58.800 }' 00:25:58.800 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:58.801 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:58.801 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:58.801 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:58.801 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:58.801 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:25:58.801 10:41:02 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:58.801 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:59.058 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:25:59.058 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:59.058 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:59.058 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:25:59.058 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:59.058 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:59.058 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:59.315 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:59.315 "name": "BaseBdev2", 00:25:59.315 "aliases": [ 00:25:59.315 "282775e6-01fe-4c27-8cfd-4e2cd660163c" 00:25:59.315 ], 00:25:59.315 "product_name": "Malloc disk", 00:25:59.315 "block_size": 4096, 00:25:59.315 "num_blocks": 8192, 00:25:59.315 "uuid": "282775e6-01fe-4c27-8cfd-4e2cd660163c", 00:25:59.315 "md_size": 32, 00:25:59.315 "md_interleave": false, 00:25:59.315 "dif_type": 0, 00:25:59.315 "assigned_rate_limits": { 00:25:59.315 "rw_ios_per_sec": 0, 00:25:59.315 "rw_mbytes_per_sec": 0, 00:25:59.315 "r_mbytes_per_sec": 0, 00:25:59.315 "w_mbytes_per_sec": 0 00:25:59.315 }, 00:25:59.315 "claimed": true, 00:25:59.315 "claim_type": "exclusive_write", 00:25:59.315 "zoned": false, 00:25:59.315 "supported_io_types": { 
00:25:59.315 "read": true, 00:25:59.315 "write": true, 00:25:59.315 "unmap": true, 00:25:59.315 "flush": true, 00:25:59.315 "reset": true, 00:25:59.315 "nvme_admin": false, 00:25:59.315 "nvme_io": false, 00:25:59.315 "nvme_io_md": false, 00:25:59.315 "write_zeroes": true, 00:25:59.315 "zcopy": true, 00:25:59.315 "get_zone_info": false, 00:25:59.315 "zone_management": false, 00:25:59.315 "zone_append": false, 00:25:59.315 "compare": false, 00:25:59.315 "compare_and_write": false, 00:25:59.315 "abort": true, 00:25:59.315 "seek_hole": false, 00:25:59.315 "seek_data": false, 00:25:59.315 "copy": true, 00:25:59.315 "nvme_iov_md": false 00:25:59.315 }, 00:25:59.315 "memory_domains": [ 00:25:59.315 { 00:25:59.315 "dma_device_id": "system", 00:25:59.315 "dma_device_type": 1 00:25:59.315 }, 00:25:59.315 { 00:25:59.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.315 "dma_device_type": 2 00:25:59.315 } 00:25:59.315 ], 00:25:59.315 "driver_specific": {} 00:25:59.315 }' 00:25:59.315 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:59.315 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:59.315 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:59.315 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:59.315 10:41:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:59.315 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:25:59.315 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:59.573 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:59.573 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:25:59.573 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:59.573 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:59.573 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:25:59.573 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:59.833 [2024-07-25 10:41:03.382750] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:59.833 
10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.833 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.094 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:00.094 "name": "Existed_Raid", 00:26:00.094 "uuid": "53cf1cbd-b235-4c41-961c-c44c5964499c", 00:26:00.094 "strip_size_kb": 0, 00:26:00.094 "state": "online", 00:26:00.094 "raid_level": "raid1", 00:26:00.094 "superblock": true, 00:26:00.094 "num_base_bdevs": 2, 00:26:00.094 "num_base_bdevs_discovered": 1, 00:26:00.094 "num_base_bdevs_operational": 1, 00:26:00.094 "base_bdevs_list": [ 00:26:00.094 { 00:26:00.094 "name": null, 00:26:00.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.094 "is_configured": false, 00:26:00.094 "data_offset": 256, 00:26:00.094 "data_size": 7936 00:26:00.094 }, 00:26:00.094 { 00:26:00.094 "name": "BaseBdev2", 00:26:00.094 "uuid": "282775e6-01fe-4c27-8cfd-4e2cd660163c", 00:26:00.094 "is_configured": true, 00:26:00.094 "data_offset": 256, 00:26:00.094 "data_size": 7936 00:26:00.094 } 00:26:00.094 ] 00:26:00.094 }' 00:26:00.094 10:41:03 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:00.094 10:41:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:00.659 10:41:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:00.659 10:41:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:00.659 10:41:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.659 10:41:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:00.917 10:41:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:00.917 10:41:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:00.917 10:41:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:01.236 [2024-07-25 10:41:04.727990] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:01.237 [2024-07-25 10:41:04.728113] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:01.237 [2024-07-25 10:41:04.741737] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:01.237 [2024-07-25 10:41:04.741793] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:01.237 [2024-07-25 10:41:04.741807] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19a8d00 name Existed_Raid, state offline 00:26:01.237 10:41:04 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:01.237 10:41:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:01.237 10:41:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.237 10:41:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:01.494 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:01.494 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:01.494 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:26:01.494 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 2467975 00:26:01.494 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 2467975 ']' 00:26:01.494 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 2467975 00:26:01.494 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:26:01.494 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:01.494 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2467975 00:26:01.494 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:01.494 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:01.494 10:41:05 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2467975' 00:26:01.494 killing process with pid 2467975 00:26:01.494 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 2467975 00:26:01.494 [2024-07-25 10:41:05.077386] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:01.494 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 2467975 00:26:01.494 [2024-07-25 10:41:05.078615] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:01.753 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:26:01.753 00:26:01.753 real 0m10.793s 00:26:01.753 user 0m19.473s 00:26:01.753 sys 0m1.551s 00:26:01.753 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:01.753 10:41:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:01.753 ************************************ 00:26:01.753 END TEST raid_state_function_test_sb_md_separate 00:26:01.753 ************************************ 00:26:01.753 10:41:05 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:26:01.753 10:41:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:01.753 10:41:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:01.753 10:41:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:01.753 ************************************ 00:26:01.753 START TEST raid_superblock_test_md_separate 00:26:01.753 ************************************ 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=2469511 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L 
bdev_raid 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 2469511 /var/tmp/spdk-raid.sock 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 2469511 ']' 00:26:01.753 10:41:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:01.754 10:41:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:01.754 10:41:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:01.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:01.754 10:41:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:01.754 10:41:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:02.012 [2024-07-25 10:41:05.476703] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:26:02.012 [2024-07-25 10:41:05.476787] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469511 ] 00:26:02.012 [2024-07-25 10:41:05.560962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.012 [2024-07-25 10:41:05.683769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.269 [2024-07-25 10:41:05.762450] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:02.269 [2024-07-25 10:41:05.762499] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:02.834 10:41:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:02.834 10:41:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:26:02.834 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:26:02.834 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:02.834 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:26:02.834 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:26:02.834 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:02.834 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:02.834 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:02.834 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:02.834 10:41:06 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:26:03.091 malloc1 00:26:03.091 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:03.349 [2024-07-25 10:41:06.932885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:03.349 [2024-07-25 10:41:06.932950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.349 [2024-07-25 10:41:06.932979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc40 00:26:03.349 [2024-07-25 10:41:06.932994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.349 [2024-07-25 10:41:06.934638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.349 [2024-07-25 10:41:06.934666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:03.349 pt1 00:26:03.349 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:03.349 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:03.349 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:26:03.349 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:26:03.349 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:03.349 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:03.349 10:41:06 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:03.349 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:03.349 10:41:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:26:03.607 malloc2 00:26:03.607 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:03.865 [2024-07-25 10:41:07.474980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:03.865 [2024-07-25 10:41:07.475056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.865 [2024-07-25 10:41:07.475083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9b91c0 00:26:03.865 [2024-07-25 10:41:07.475115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.865 [2024-07-25 10:41:07.476792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.865 [2024-07-25 10:41:07.476820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:03.865 pt2 00:26:03.865 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:03.865 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:03.865 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:26:04.123 [2024-07-25 10:41:07.723674] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:04.123 [2024-07-25 10:41:07.724938] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:04.123 [2024-07-25 10:41:07.725133] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x9ac400 00:26:04.123 [2024-07-25 10:41:07.725149] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:04.123 [2024-07-25 10:41:07.725234] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9b9d60 00:26:04.123 [2024-07-25 10:41:07.725366] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9ac400 00:26:04.123 [2024-07-25 10:41:07.725380] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9ac400 00:26:04.123 [2024-07-25 10:41:07.725483] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:04.123 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:04.123 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:04.123 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:04.123 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:04.123 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:04.123 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:04.123 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:04.123 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:04.123 10:41:07 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:04.123 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:04.123 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.123 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.398 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:04.398 "name": "raid_bdev1", 00:26:04.398 "uuid": "612d34ba-b07e-4063-97ae-7d4f0ef21ef5", 00:26:04.398 "strip_size_kb": 0, 00:26:04.398 "state": "online", 00:26:04.398 "raid_level": "raid1", 00:26:04.398 "superblock": true, 00:26:04.398 "num_base_bdevs": 2, 00:26:04.398 "num_base_bdevs_discovered": 2, 00:26:04.398 "num_base_bdevs_operational": 2, 00:26:04.398 "base_bdevs_list": [ 00:26:04.398 { 00:26:04.398 "name": "pt1", 00:26:04.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:04.398 "is_configured": true, 00:26:04.398 "data_offset": 256, 00:26:04.398 "data_size": 7936 00:26:04.398 }, 00:26:04.398 { 00:26:04.398 "name": "pt2", 00:26:04.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:04.398 "is_configured": true, 00:26:04.398 "data_offset": 256, 00:26:04.398 "data_size": 7936 00:26:04.398 } 00:26:04.398 ] 00:26:04.398 }' 00:26:04.398 10:41:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.398 10:41:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:04.980 10:41:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:26:04.980 10:41:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=raid_bdev1 00:26:04.980 10:41:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:04.980 10:41:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:04.980 10:41:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:04.980 10:41:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:26:04.980 10:41:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:04.980 10:41:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:05.237 [2024-07-25 10:41:08.754664] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:05.237 10:41:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:05.237 "name": "raid_bdev1", 00:26:05.237 "aliases": [ 00:26:05.237 "612d34ba-b07e-4063-97ae-7d4f0ef21ef5" 00:26:05.237 ], 00:26:05.237 "product_name": "Raid Volume", 00:26:05.237 "block_size": 4096, 00:26:05.237 "num_blocks": 7936, 00:26:05.237 "uuid": "612d34ba-b07e-4063-97ae-7d4f0ef21ef5", 00:26:05.237 "md_size": 32, 00:26:05.237 "md_interleave": false, 00:26:05.237 "dif_type": 0, 00:26:05.237 "assigned_rate_limits": { 00:26:05.237 "rw_ios_per_sec": 0, 00:26:05.237 "rw_mbytes_per_sec": 0, 00:26:05.237 "r_mbytes_per_sec": 0, 00:26:05.237 "w_mbytes_per_sec": 0 00:26:05.237 }, 00:26:05.237 "claimed": false, 00:26:05.237 "zoned": false, 00:26:05.237 "supported_io_types": { 00:26:05.237 "read": true, 00:26:05.237 "write": true, 00:26:05.237 "unmap": false, 00:26:05.237 "flush": false, 00:26:05.237 "reset": true, 00:26:05.237 "nvme_admin": false, 00:26:05.237 "nvme_io": false, 00:26:05.237 "nvme_io_md": false, 00:26:05.237 "write_zeroes": true, 
00:26:05.237 "zcopy": false, 00:26:05.237 "get_zone_info": false, 00:26:05.237 "zone_management": false, 00:26:05.237 "zone_append": false, 00:26:05.237 "compare": false, 00:26:05.237 "compare_and_write": false, 00:26:05.237 "abort": false, 00:26:05.237 "seek_hole": false, 00:26:05.237 "seek_data": false, 00:26:05.237 "copy": false, 00:26:05.237 "nvme_iov_md": false 00:26:05.237 }, 00:26:05.237 "memory_domains": [ 00:26:05.237 { 00:26:05.237 "dma_device_id": "system", 00:26:05.237 "dma_device_type": 1 00:26:05.237 }, 00:26:05.237 { 00:26:05.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.237 "dma_device_type": 2 00:26:05.237 }, 00:26:05.237 { 00:26:05.237 "dma_device_id": "system", 00:26:05.237 "dma_device_type": 1 00:26:05.237 }, 00:26:05.237 { 00:26:05.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.237 "dma_device_type": 2 00:26:05.237 } 00:26:05.237 ], 00:26:05.237 "driver_specific": { 00:26:05.237 "raid": { 00:26:05.237 "uuid": "612d34ba-b07e-4063-97ae-7d4f0ef21ef5", 00:26:05.237 "strip_size_kb": 0, 00:26:05.237 "state": "online", 00:26:05.237 "raid_level": "raid1", 00:26:05.237 "superblock": true, 00:26:05.237 "num_base_bdevs": 2, 00:26:05.237 "num_base_bdevs_discovered": 2, 00:26:05.237 "num_base_bdevs_operational": 2, 00:26:05.237 "base_bdevs_list": [ 00:26:05.237 { 00:26:05.237 "name": "pt1", 00:26:05.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:05.237 "is_configured": true, 00:26:05.237 "data_offset": 256, 00:26:05.237 "data_size": 7936 00:26:05.237 }, 00:26:05.237 { 00:26:05.237 "name": "pt2", 00:26:05.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:05.237 "is_configured": true, 00:26:05.237 "data_offset": 256, 00:26:05.237 "data_size": 7936 00:26:05.237 } 00:26:05.237 ] 00:26:05.237 } 00:26:05.237 } 00:26:05.237 }' 00:26:05.237 10:41:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:05.237 10:41:08 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:05.237 pt2' 00:26:05.237 10:41:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:05.237 10:41:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:05.237 10:41:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:05.494 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:05.494 "name": "pt1", 00:26:05.494 "aliases": [ 00:26:05.494 "00000000-0000-0000-0000-000000000001" 00:26:05.494 ], 00:26:05.494 "product_name": "passthru", 00:26:05.494 "block_size": 4096, 00:26:05.494 "num_blocks": 8192, 00:26:05.494 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:05.494 "md_size": 32, 00:26:05.494 "md_interleave": false, 00:26:05.494 "dif_type": 0, 00:26:05.494 "assigned_rate_limits": { 00:26:05.494 "rw_ios_per_sec": 0, 00:26:05.494 "rw_mbytes_per_sec": 0, 00:26:05.494 "r_mbytes_per_sec": 0, 00:26:05.494 "w_mbytes_per_sec": 0 00:26:05.494 }, 00:26:05.494 "claimed": true, 00:26:05.494 "claim_type": "exclusive_write", 00:26:05.494 "zoned": false, 00:26:05.494 "supported_io_types": { 00:26:05.494 "read": true, 00:26:05.494 "write": true, 00:26:05.494 "unmap": true, 00:26:05.494 "flush": true, 00:26:05.494 "reset": true, 00:26:05.494 "nvme_admin": false, 00:26:05.494 "nvme_io": false, 00:26:05.494 "nvme_io_md": false, 00:26:05.494 "write_zeroes": true, 00:26:05.494 "zcopy": true, 00:26:05.494 "get_zone_info": false, 00:26:05.494 "zone_management": false, 00:26:05.494 "zone_append": false, 00:26:05.494 "compare": false, 00:26:05.494 "compare_and_write": false, 00:26:05.494 "abort": true, 00:26:05.494 "seek_hole": false, 00:26:05.494 "seek_data": false, 00:26:05.494 "copy": true, 00:26:05.494 
"nvme_iov_md": false 00:26:05.494 }, 00:26:05.494 "memory_domains": [ 00:26:05.494 { 00:26:05.494 "dma_device_id": "system", 00:26:05.494 "dma_device_type": 1 00:26:05.494 }, 00:26:05.494 { 00:26:05.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.494 "dma_device_type": 2 00:26:05.494 } 00:26:05.494 ], 00:26:05.494 "driver_specific": { 00:26:05.494 "passthru": { 00:26:05.494 "name": "pt1", 00:26:05.494 "base_bdev_name": "malloc1" 00:26:05.494 } 00:26:05.494 } 00:26:05.494 }' 00:26:05.494 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:05.494 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:05.494 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:05.494 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:05.494 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:05.750 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:05.751 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:05.751 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:05.751 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:26:05.751 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:05.751 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:05.751 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:05.751 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:05.751 10:41:09 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:05.751 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:06.008 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:06.008 "name": "pt2", 00:26:06.008 "aliases": [ 00:26:06.008 "00000000-0000-0000-0000-000000000002" 00:26:06.008 ], 00:26:06.008 "product_name": "passthru", 00:26:06.008 "block_size": 4096, 00:26:06.008 "num_blocks": 8192, 00:26:06.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:06.008 "md_size": 32, 00:26:06.008 "md_interleave": false, 00:26:06.008 "dif_type": 0, 00:26:06.008 "assigned_rate_limits": { 00:26:06.008 "rw_ios_per_sec": 0, 00:26:06.008 "rw_mbytes_per_sec": 0, 00:26:06.008 "r_mbytes_per_sec": 0, 00:26:06.008 "w_mbytes_per_sec": 0 00:26:06.008 }, 00:26:06.008 "claimed": true, 00:26:06.008 "claim_type": "exclusive_write", 00:26:06.008 "zoned": false, 00:26:06.008 "supported_io_types": { 00:26:06.008 "read": true, 00:26:06.008 "write": true, 00:26:06.008 "unmap": true, 00:26:06.008 "flush": true, 00:26:06.008 "reset": true, 00:26:06.008 "nvme_admin": false, 00:26:06.008 "nvme_io": false, 00:26:06.008 "nvme_io_md": false, 00:26:06.008 "write_zeroes": true, 00:26:06.008 "zcopy": true, 00:26:06.008 "get_zone_info": false, 00:26:06.008 "zone_management": false, 00:26:06.008 "zone_append": false, 00:26:06.008 "compare": false, 00:26:06.008 "compare_and_write": false, 00:26:06.008 "abort": true, 00:26:06.008 "seek_hole": false, 00:26:06.008 "seek_data": false, 00:26:06.008 "copy": true, 00:26:06.008 "nvme_iov_md": false 00:26:06.008 }, 00:26:06.008 "memory_domains": [ 00:26:06.008 { 00:26:06.008 "dma_device_id": "system", 00:26:06.008 "dma_device_type": 1 00:26:06.008 }, 00:26:06.008 { 00:26:06.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.008 "dma_device_type": 2 00:26:06.008 } 
00:26:06.008 ], 00:26:06.008 "driver_specific": { 00:26:06.008 "passthru": { 00:26:06.008 "name": "pt2", 00:26:06.008 "base_bdev_name": "malloc2" 00:26:06.008 } 00:26:06.008 } 00:26:06.008 }' 00:26:06.008 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:06.008 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:06.008 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:06.008 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:06.265 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:06.265 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:06.265 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:06.265 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:06.265 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:26:06.265 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:06.265 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:06.265 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:06.265 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:06.265 10:41:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:26:06.523 [2024-07-25 10:41:10.138511] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:06.523 10:41:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=612d34ba-b07e-4063-97ae-7d4f0ef21ef5 00:26:06.523 10:41:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 612d34ba-b07e-4063-97ae-7d4f0ef21ef5 ']' 00:26:06.523 10:41:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:06.780 [2024-07-25 10:41:10.378837] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:06.780 [2024-07-25 10:41:10.378861] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:06.780 [2024-07-25 10:41:10.378937] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:06.780 [2024-07-25 10:41:10.378995] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:06.780 [2024-07-25 10:41:10.379007] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9ac400 name raid_bdev1, state offline 00:26:06.780 10:41:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.780 10:41:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:26:07.037 10:41:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:26:07.037 10:41:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:26:07.037 10:41:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:07.037 10:41:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:26:07.294 10:41:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:07.294 10:41:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:07.551 10:41:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:07.551 10:41:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:07.809 10:41:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:26:07.809 10:41:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:07.809 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:26:07.809 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:07.809 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:07.809 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:07.809 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:07.809 10:41:11 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:07.809 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:07.809 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:07.809 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:07.809 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:07.809 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:08.067 [2024-07-25 10:41:11.742444] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:08.067 [2024-07-25 10:41:11.743726] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:08.067 [2024-07-25 10:41:11.743790] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:08.067 [2024-07-25 10:41:11.743858] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:08.067 [2024-07-25 10:41:11.743881] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:08.067 [2024-07-25 10:41:11.743891] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9ad280 name raid_bdev1, state configuring 00:26:08.067 request: 00:26:08.067 { 00:26:08.067 "name": "raid_bdev1", 00:26:08.067 "raid_level": "raid1", 00:26:08.067 "base_bdevs": [ 
00:26:08.067 "malloc1", 00:26:08.067 "malloc2" 00:26:08.067 ], 00:26:08.067 "superblock": false, 00:26:08.067 "method": "bdev_raid_create", 00:26:08.067 "req_id": 1 00:26:08.067 } 00:26:08.067 Got JSON-RPC error response 00:26:08.067 response: 00:26:08.067 { 00:26:08.067 "code": -17, 00:26:08.067 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:08.067 } 00:26:08.067 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:26:08.067 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:08.067 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:08.067 10:41:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:08.067 10:41:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.067 10:41:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:26:08.632 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:26:08.632 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:26:08.632 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:08.632 [2024-07-25 10:41:12.331929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:08.632 [2024-07-25 10:41:12.332007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.632 [2024-07-25 10:41:12.332030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82be70 
00:26:08.632 [2024-07-25 10:41:12.332043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.632 [2024-07-25 10:41:12.333514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.632 [2024-07-25 10:41:12.333536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:08.632 [2024-07-25 10:41:12.333609] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:08.632 [2024-07-25 10:41:12.333639] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:08.632 pt1 00:26:08.890 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:08.890 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:08.890 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:08.890 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:08.890 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:08.890 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:08.890 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:08.890 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:08.890 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:08.890 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:08.890 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.890 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.147 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:09.147 "name": "raid_bdev1", 00:26:09.147 "uuid": "612d34ba-b07e-4063-97ae-7d4f0ef21ef5", 00:26:09.147 "strip_size_kb": 0, 00:26:09.147 "state": "configuring", 00:26:09.147 "raid_level": "raid1", 00:26:09.147 "superblock": true, 00:26:09.147 "num_base_bdevs": 2, 00:26:09.147 "num_base_bdevs_discovered": 1, 00:26:09.147 "num_base_bdevs_operational": 2, 00:26:09.147 "base_bdevs_list": [ 00:26:09.147 { 00:26:09.147 "name": "pt1", 00:26:09.147 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:09.147 "is_configured": true, 00:26:09.147 "data_offset": 256, 00:26:09.147 "data_size": 7936 00:26:09.147 }, 00:26:09.147 { 00:26:09.147 "name": null, 00:26:09.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:09.147 "is_configured": false, 00:26:09.147 "data_offset": 256, 00:26:09.147 "data_size": 7936 00:26:09.147 } 00:26:09.147 ] 00:26:09.147 }' 00:26:09.147 10:41:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:09.147 10:41:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:09.762 [2024-07-25 10:41:13.402815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:09.762 [2024-07-25 10:41:13.402889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.762 [2024-07-25 10:41:13.402915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa30 00:26:09.762 [2024-07-25 10:41:13.402929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.762 [2024-07-25 10:41:13.403211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.762 [2024-07-25 10:41:13.403236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:09.762 [2024-07-25 10:41:13.403292] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:09.762 [2024-07-25 10:41:13.403319] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:09.762 [2024-07-25 10:41:13.403433] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x9ade10 00:26:09.762 [2024-07-25 10:41:13.403449] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:09.762 [2024-07-25 10:41:13.403512] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9af520 00:26:09.762 [2024-07-25 10:41:13.403637] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9ade10 00:26:09.762 [2024-07-25 10:41:13.403652] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9ade10 00:26:09.762 [2024-07-25 10:41:13.403735] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:09.762 pt2 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < 
num_base_bdevs )) 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.762 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:10.019 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:10.019 "name": "raid_bdev1", 00:26:10.019 "uuid": "612d34ba-b07e-4063-97ae-7d4f0ef21ef5", 00:26:10.020 "strip_size_kb": 0, 00:26:10.020 "state": "online", 00:26:10.020 "raid_level": "raid1", 00:26:10.020 "superblock": true, 00:26:10.020 "num_base_bdevs": 2, 00:26:10.020 
"num_base_bdevs_discovered": 2, 00:26:10.020 "num_base_bdevs_operational": 2, 00:26:10.020 "base_bdevs_list": [ 00:26:10.020 { 00:26:10.020 "name": "pt1", 00:26:10.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:10.020 "is_configured": true, 00:26:10.020 "data_offset": 256, 00:26:10.020 "data_size": 7936 00:26:10.020 }, 00:26:10.020 { 00:26:10.020 "name": "pt2", 00:26:10.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:10.020 "is_configured": true, 00:26:10.020 "data_offset": 256, 00:26:10.020 "data_size": 7936 00:26:10.020 } 00:26:10.020 ] 00:26:10.020 }' 00:26:10.020 10:41:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:10.020 10:41:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:10.584 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:26:10.584 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:10.584 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:10.584 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:10.584 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:10.584 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:26:10.584 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:10.584 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:10.842 [2024-07-25 10:41:14.481942] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:10.842 10:41:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:10.842 "name": "raid_bdev1", 00:26:10.842 "aliases": [ 00:26:10.842 "612d34ba-b07e-4063-97ae-7d4f0ef21ef5" 00:26:10.842 ], 00:26:10.842 "product_name": "Raid Volume", 00:26:10.842 "block_size": 4096, 00:26:10.842 "num_blocks": 7936, 00:26:10.842 "uuid": "612d34ba-b07e-4063-97ae-7d4f0ef21ef5", 00:26:10.842 "md_size": 32, 00:26:10.842 "md_interleave": false, 00:26:10.842 "dif_type": 0, 00:26:10.842 "assigned_rate_limits": { 00:26:10.842 "rw_ios_per_sec": 0, 00:26:10.842 "rw_mbytes_per_sec": 0, 00:26:10.842 "r_mbytes_per_sec": 0, 00:26:10.842 "w_mbytes_per_sec": 0 00:26:10.842 }, 00:26:10.842 "claimed": false, 00:26:10.842 "zoned": false, 00:26:10.842 "supported_io_types": { 00:26:10.842 "read": true, 00:26:10.842 "write": true, 00:26:10.842 "unmap": false, 00:26:10.842 "flush": false, 00:26:10.842 "reset": true, 00:26:10.842 "nvme_admin": false, 00:26:10.842 "nvme_io": false, 00:26:10.842 "nvme_io_md": false, 00:26:10.842 "write_zeroes": true, 00:26:10.842 "zcopy": false, 00:26:10.842 "get_zone_info": false, 00:26:10.842 "zone_management": false, 00:26:10.842 "zone_append": false, 00:26:10.842 "compare": false, 00:26:10.842 "compare_and_write": false, 00:26:10.842 "abort": false, 00:26:10.842 "seek_hole": false, 00:26:10.842 "seek_data": false, 00:26:10.842 "copy": false, 00:26:10.842 "nvme_iov_md": false 00:26:10.842 }, 00:26:10.842 "memory_domains": [ 00:26:10.842 { 00:26:10.842 "dma_device_id": "system", 00:26:10.842 "dma_device_type": 1 00:26:10.842 }, 00:26:10.842 { 00:26:10.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.842 "dma_device_type": 2 00:26:10.842 }, 00:26:10.842 { 00:26:10.842 "dma_device_id": "system", 00:26:10.842 "dma_device_type": 1 00:26:10.842 }, 00:26:10.842 { 00:26:10.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.842 "dma_device_type": 2 00:26:10.842 } 00:26:10.842 ], 00:26:10.842 "driver_specific": { 00:26:10.842 "raid": { 
00:26:10.842 "uuid": "612d34ba-b07e-4063-97ae-7d4f0ef21ef5", 00:26:10.842 "strip_size_kb": 0, 00:26:10.842 "state": "online", 00:26:10.842 "raid_level": "raid1", 00:26:10.842 "superblock": true, 00:26:10.842 "num_base_bdevs": 2, 00:26:10.842 "num_base_bdevs_discovered": 2, 00:26:10.842 "num_base_bdevs_operational": 2, 00:26:10.842 "base_bdevs_list": [ 00:26:10.842 { 00:26:10.842 "name": "pt1", 00:26:10.842 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:10.842 "is_configured": true, 00:26:10.842 "data_offset": 256, 00:26:10.842 "data_size": 7936 00:26:10.842 }, 00:26:10.842 { 00:26:10.842 "name": "pt2", 00:26:10.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:10.842 "is_configured": true, 00:26:10.842 "data_offset": 256, 00:26:10.842 "data_size": 7936 00:26:10.842 } 00:26:10.842 ] 00:26:10.842 } 00:26:10.842 } 00:26:10.842 }' 00:26:10.842 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:10.842 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:10.842 pt2' 00:26:10.842 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:10.842 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:10.842 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:11.407 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:11.407 "name": "pt1", 00:26:11.407 "aliases": [ 00:26:11.407 "00000000-0000-0000-0000-000000000001" 00:26:11.407 ], 00:26:11.407 "product_name": "passthru", 00:26:11.407 "block_size": 4096, 00:26:11.407 "num_blocks": 8192, 00:26:11.407 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:26:11.407 "md_size": 32, 00:26:11.407 "md_interleave": false, 00:26:11.407 "dif_type": 0, 00:26:11.407 "assigned_rate_limits": { 00:26:11.407 "rw_ios_per_sec": 0, 00:26:11.407 "rw_mbytes_per_sec": 0, 00:26:11.407 "r_mbytes_per_sec": 0, 00:26:11.407 "w_mbytes_per_sec": 0 00:26:11.407 }, 00:26:11.407 "claimed": true, 00:26:11.407 "claim_type": "exclusive_write", 00:26:11.407 "zoned": false, 00:26:11.407 "supported_io_types": { 00:26:11.407 "read": true, 00:26:11.407 "write": true, 00:26:11.407 "unmap": true, 00:26:11.407 "flush": true, 00:26:11.407 "reset": true, 00:26:11.407 "nvme_admin": false, 00:26:11.407 "nvme_io": false, 00:26:11.407 "nvme_io_md": false, 00:26:11.407 "write_zeroes": true, 00:26:11.407 "zcopy": true, 00:26:11.407 "get_zone_info": false, 00:26:11.407 "zone_management": false, 00:26:11.407 "zone_append": false, 00:26:11.407 "compare": false, 00:26:11.407 "compare_and_write": false, 00:26:11.407 "abort": true, 00:26:11.407 "seek_hole": false, 00:26:11.407 "seek_data": false, 00:26:11.407 "copy": true, 00:26:11.407 "nvme_iov_md": false 00:26:11.407 }, 00:26:11.407 "memory_domains": [ 00:26:11.407 { 00:26:11.407 "dma_device_id": "system", 00:26:11.407 "dma_device_type": 1 00:26:11.407 }, 00:26:11.407 { 00:26:11.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:11.407 "dma_device_type": 2 00:26:11.407 } 00:26:11.407 ], 00:26:11.407 "driver_specific": { 00:26:11.407 "passthru": { 00:26:11.407 "name": "pt1", 00:26:11.407 "base_bdev_name": "malloc1" 00:26:11.407 } 00:26:11.407 } 00:26:11.407 }' 00:26:11.407 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:11.407 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:11.407 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:11.407 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:26:11.407 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:11.407 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:11.407 10:41:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:11.407 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:11.407 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:26:11.407 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:11.407 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:11.665 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:11.665 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:11.665 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:11.665 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:11.922 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:11.922 "name": "pt2", 00:26:11.922 "aliases": [ 00:26:11.922 "00000000-0000-0000-0000-000000000002" 00:26:11.922 ], 00:26:11.922 "product_name": "passthru", 00:26:11.922 "block_size": 4096, 00:26:11.922 "num_blocks": 8192, 00:26:11.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:11.922 "md_size": 32, 00:26:11.922 "md_interleave": false, 00:26:11.922 "dif_type": 0, 00:26:11.922 "assigned_rate_limits": { 00:26:11.922 "rw_ios_per_sec": 0, 00:26:11.922 "rw_mbytes_per_sec": 0, 00:26:11.922 "r_mbytes_per_sec": 0, 00:26:11.922 
"w_mbytes_per_sec": 0 00:26:11.922 }, 00:26:11.922 "claimed": true, 00:26:11.922 "claim_type": "exclusive_write", 00:26:11.922 "zoned": false, 00:26:11.922 "supported_io_types": { 00:26:11.922 "read": true, 00:26:11.922 "write": true, 00:26:11.922 "unmap": true, 00:26:11.922 "flush": true, 00:26:11.922 "reset": true, 00:26:11.922 "nvme_admin": false, 00:26:11.922 "nvme_io": false, 00:26:11.922 "nvme_io_md": false, 00:26:11.922 "write_zeroes": true, 00:26:11.922 "zcopy": true, 00:26:11.922 "get_zone_info": false, 00:26:11.922 "zone_management": false, 00:26:11.922 "zone_append": false, 00:26:11.922 "compare": false, 00:26:11.922 "compare_and_write": false, 00:26:11.922 "abort": true, 00:26:11.922 "seek_hole": false, 00:26:11.922 "seek_data": false, 00:26:11.922 "copy": true, 00:26:11.922 "nvme_iov_md": false 00:26:11.922 }, 00:26:11.922 "memory_domains": [ 00:26:11.922 { 00:26:11.922 "dma_device_id": "system", 00:26:11.922 "dma_device_type": 1 00:26:11.922 }, 00:26:11.922 { 00:26:11.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:11.922 "dma_device_type": 2 00:26:11.922 } 00:26:11.922 ], 00:26:11.922 "driver_specific": { 00:26:11.922 "passthru": { 00:26:11.922 "name": "pt2", 00:26:11.922 "base_bdev_name": "malloc2" 00:26:11.922 } 00:26:11.922 } 00:26:11.922 }' 00:26:11.922 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:11.922 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:11.922 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:11.922 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:11.923 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:11.923 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:11.923 10:41:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:11.923 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:12.180 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:26:12.180 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:12.180 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:12.180 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:12.180 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:12.180 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:26:12.437 [2024-07-25 10:41:15.945862] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:12.437 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 612d34ba-b07e-4063-97ae-7d4f0ef21ef5 '!=' 612d34ba-b07e-4063-97ae-7d4f0ef21ef5 ']' 00:26:12.437 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:26:12.437 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:12.437 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:26:12.437 10:41:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:12.695 [2024-07-25 10:41:16.182316] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:12.695 10:41:16 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:12.695 10:41:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:12.695 10:41:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:12.695 10:41:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:12.695 10:41:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:12.695 10:41:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:12.695 10:41:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:12.695 10:41:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:12.695 10:41:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:12.695 10:41:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:12.695 10:41:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.695 10:41:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.953 10:41:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:12.953 "name": "raid_bdev1", 00:26:12.953 "uuid": "612d34ba-b07e-4063-97ae-7d4f0ef21ef5", 00:26:12.953 "strip_size_kb": 0, 00:26:12.953 "state": "online", 00:26:12.953 "raid_level": "raid1", 00:26:12.953 "superblock": true, 00:26:12.953 "num_base_bdevs": 2, 00:26:12.953 "num_base_bdevs_discovered": 1, 00:26:12.953 "num_base_bdevs_operational": 1, 00:26:12.953 
"base_bdevs_list": [ 00:26:12.953 { 00:26:12.953 "name": null, 00:26:12.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.953 "is_configured": false, 00:26:12.953 "data_offset": 256, 00:26:12.953 "data_size": 7936 00:26:12.953 }, 00:26:12.953 { 00:26:12.953 "name": "pt2", 00:26:12.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:12.953 "is_configured": true, 00:26:12.953 "data_offset": 256, 00:26:12.953 "data_size": 7936 00:26:12.953 } 00:26:12.953 ] 00:26:12.953 }' 00:26:12.953 10:41:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:12.953 10:41:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:13.519 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:13.778 [2024-07-25 10:41:17.265163] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:13.778 [2024-07-25 10:41:17.265189] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:13.778 [2024-07-25 10:41:17.265264] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:13.778 [2024-07-25 10:41:17.265320] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:13.778 [2024-07-25 10:41:17.265332] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9ade10 name raid_bdev1, state offline 00:26:13.778 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.778 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:26:14.037 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- 
# raid_bdev= 00:26:14.037 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:26:14.037 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:26:14.037 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:14.037 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:14.295 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:26:14.295 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:14.295 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:26:14.295 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:26:14.295 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:26:14.295 10:41:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:14.553 [2024-07-25 10:41:18.119433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:14.553 [2024-07-25 10:41:18.119515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.553 [2024-07-25 10:41:18.119554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9ae850 00:26:14.553 [2024-07-25 10:41:18.119567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.553 [2024-07-25 10:41:18.120990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.553 [2024-07-25 
10:41:18.121013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:14.553 [2024-07-25 10:41:18.121100] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:14.553 [2024-07-25 10:41:18.121143] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:14.553 [2024-07-25 10:41:18.121236] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x9ad770 00:26:14.553 [2024-07-25 10:41:18.121250] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:14.553 [2024-07-25 10:41:18.121310] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9af290 00:26:14.553 [2024-07-25 10:41:18.121434] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9ad770 00:26:14.553 [2024-07-25 10:41:18.121469] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9ad770 00:26:14.553 [2024-07-25 10:41:18.121544] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.553 pt2 00:26:14.553 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:14.553 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:14.553 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:14.553 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:14.553 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:14.553 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:14.553 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:14.553 
10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:14.553 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:14.553 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:14.553 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.553 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.811 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.811 "name": "raid_bdev1", 00:26:14.811 "uuid": "612d34ba-b07e-4063-97ae-7d4f0ef21ef5", 00:26:14.811 "strip_size_kb": 0, 00:26:14.811 "state": "online", 00:26:14.811 "raid_level": "raid1", 00:26:14.811 "superblock": true, 00:26:14.811 "num_base_bdevs": 2, 00:26:14.811 "num_base_bdevs_discovered": 1, 00:26:14.811 "num_base_bdevs_operational": 1, 00:26:14.811 "base_bdevs_list": [ 00:26:14.811 { 00:26:14.811 "name": null, 00:26:14.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.811 "is_configured": false, 00:26:14.811 "data_offset": 256, 00:26:14.811 "data_size": 7936 00:26:14.811 }, 00:26:14.811 { 00:26:14.811 "name": "pt2", 00:26:14.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:14.811 "is_configured": true, 00:26:14.811 "data_offset": 256, 00:26:14.811 "data_size": 7936 00:26:14.811 } 00:26:14.811 ] 00:26:14.811 }' 00:26:14.811 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.811 10:41:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:15.377 10:41:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:15.636 [2024-07-25 10:41:19.250427] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:15.636 [2024-07-25 10:41:19.250457] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:15.636 [2024-07-25 10:41:19.250538] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:15.636 [2024-07-25 10:41:19.250599] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:15.636 [2024-07-25 10:41:19.250612] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9ad770 name raid_bdev1, state offline 00:26:15.636 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.636 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:26:15.894 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:26:15.894 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:26:15.894 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:26:15.894 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:16.152 [2024-07-25 10:41:19.751735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:16.152 [2024-07-25 10:41:19.751809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.152 [2024-07-25 10:41:19.751837] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9ae090 00:26:16.152 [2024-07-25 10:41:19.751852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.152 [2024-07-25 10:41:19.753526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.152 [2024-07-25 10:41:19.753553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:16.152 [2024-07-25 10:41:19.753624] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:16.152 [2024-07-25 10:41:19.753662] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:16.152 [2024-07-25 10:41:19.753786] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:16.152 [2024-07-25 10:41:19.753805] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:16.152 [2024-07-25 10:41:19.753827] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9af5d0 name raid_bdev1, state configuring 00:26:16.152 [2024-07-25 10:41:19.753857] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:16.152 [2024-07-25 10:41:19.753935] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x9aedd0 00:26:16.152 [2024-07-25 10:41:19.753950] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:16.152 [2024-07-25 10:41:19.754015] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9ad9d0 00:26:16.152 [2024-07-25 10:41:19.754153] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9aedd0 00:26:16.152 [2024-07-25 10:41:19.754170] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9aedd0 00:26:16.152 [2024-07-25 10:41:19.754256] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.152 
pt1 00:26:16.152 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:26:16.152 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:16.152 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:16.152 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:16.152 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:16.152 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:16.152 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:16.152 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:16.152 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:16.152 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:16.152 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:16.153 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.153 10:41:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.411 10:41:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:16.411 "name": "raid_bdev1", 00:26:16.411 "uuid": "612d34ba-b07e-4063-97ae-7d4f0ef21ef5", 00:26:16.411 "strip_size_kb": 0, 00:26:16.411 "state": "online", 00:26:16.411 "raid_level": "raid1", 
00:26:16.411 "superblock": true, 00:26:16.411 "num_base_bdevs": 2, 00:26:16.411 "num_base_bdevs_discovered": 1, 00:26:16.411 "num_base_bdevs_operational": 1, 00:26:16.411 "base_bdevs_list": [ 00:26:16.411 { 00:26:16.411 "name": null, 00:26:16.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.411 "is_configured": false, 00:26:16.411 "data_offset": 256, 00:26:16.411 "data_size": 7936 00:26:16.411 }, 00:26:16.411 { 00:26:16.411 "name": "pt2", 00:26:16.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:16.411 "is_configured": true, 00:26:16.411 "data_offset": 256, 00:26:16.411 "data_size": 7936 00:26:16.411 } 00:26:16.411 ] 00:26:16.411 }' 00:26:16.411 10:41:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:16.411 10:41:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:16.976 10:41:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:16.976 10:41:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:17.234 10:41:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:26:17.234 10:41:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:17.234 10:41:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:26:17.492 [2024-07-25 10:41:21.143679] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:17.492 10:41:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 612d34ba-b07e-4063-97ae-7d4f0ef21ef5 '!=' 612d34ba-b07e-4063-97ae-7d4f0ef21ef5 ']' 00:26:17.492 
10:41:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 2469511 00:26:17.492 10:41:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 2469511 ']' 00:26:17.492 10:41:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 2469511 00:26:17.492 10:41:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:26:17.492 10:41:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:17.492 10:41:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2469511 00:26:17.492 10:41:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:17.492 10:41:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:17.492 10:41:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2469511' 00:26:17.492 killing process with pid 2469511 00:26:17.492 10:41:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 2469511 00:26:17.492 [2024-07-25 10:41:21.186578] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:17.492 10:41:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 2469511 00:26:17.492 [2024-07-25 10:41:21.186666] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:17.492 [2024-07-25 10:41:21.186736] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:17.492 [2024-07-25 10:41:21.186751] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9aedd0 name raid_bdev1, state offline 00:26:17.749 [2024-07-25 10:41:21.218058] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:26:18.007 10:41:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:26:18.007 00:26:18.007 real 0m16.066s 00:26:18.007 user 0m29.600s 00:26:18.007 sys 0m2.274s 00:26:18.007 10:41:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:18.007 10:41:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:18.007 ************************************ 00:26:18.007 END TEST raid_superblock_test_md_separate 00:26:18.007 ************************************ 00:26:18.007 10:41:21 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:26:18.007 10:41:21 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:26:18.007 10:41:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:26:18.007 10:41:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:18.007 10:41:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:18.007 ************************************ 00:26:18.007 START TEST raid_rebuild_test_sb_md_separate 00:26:18.007 ************************************ 00:26:18.007 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:26:18.007 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:18.007 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:26:18.007 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:26:18.007 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:26:18.007 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:18.007 10:41:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:18.007 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:18.007 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:26:18.007 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:18.007 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:18.007 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:26:18.007 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:18.007 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=2471701 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 2471701 /var/tmp/spdk-raid.sock 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 2471701 ']' 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:18.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.008 10:41:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:18.008 [2024-07-25 10:41:21.599417] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:26:18.008 [2024-07-25 10:41:21.599488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471701 ] 00:26:18.008 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:18.008 Zero copy mechanism will not be used. 00:26:18.008 [2024-07-25 10:41:21.675746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.266 [2024-07-25 10:41:21.785552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.266 [2024-07-25 10:41:21.862779] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:18.266 [2024-07-25 10:41:21.862816] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:19.199 10:41:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.199 10:41:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:26:19.199 10:41:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:19.199 10:41:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:26:19.199 BaseBdev1_malloc 00:26:19.199 10:41:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:19.457 [2024-07-25 10:41:23.143660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:19.457 [2024-07-25 10:41:23.143718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:19.457 [2024-07-25 
10:41:23.143749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc7edc0 00:26:19.457 [2024-07-25 10:41:23.143766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:19.457 [2024-07-25 10:41:23.145164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:19.457 [2024-07-25 10:41:23.145192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:19.457 BaseBdev1 00:26:19.457 10:41:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:19.457 10:41:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:26:20.022 BaseBdev2_malloc 00:26:20.022 10:41:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:20.022 [2024-07-25 10:41:23.664997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:20.022 [2024-07-25 10:41:23.665056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:20.022 [2024-07-25 10:41:23.665084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe0c340 00:26:20.022 [2024-07-25 10:41:23.665100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:20.022 [2024-07-25 10:41:23.666353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:20.022 [2024-07-25 10:41:23.666381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:20.022 BaseBdev2 00:26:20.022 10:41:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:26:20.279 spare_malloc 00:26:20.279 10:41:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:20.537 spare_delay 00:26:20.537 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:20.797 [2024-07-25 10:41:24.447314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:20.797 [2024-07-25 10:41:24.447369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:20.797 [2024-07-25 10:41:24.447395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe00670 00:26:20.797 [2024-07-25 10:41:24.447410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:20.797 [2024-07-25 10:41:24.448722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:20.797 [2024-07-25 10:41:24.448750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:20.797 spare 00:26:20.797 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:21.076 [2024-07-25 10:41:24.691998] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:21.076 [2024-07-25 10:41:24.693258] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:21.076 [2024-07-25 10:41:24.693443] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xe01bf0 00:26:21.076 [2024-07-25 10:41:24.693462] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:21.076 [2024-07-25 10:41:24.693535] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc75130 00:26:21.076 [2024-07-25 10:41:24.693682] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe01bf0 00:26:21.076 [2024-07-25 10:41:24.693698] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe01bf0 00:26:21.076 [2024-07-25 10:41:24.693777] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:21.076 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:21.076 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:21.076 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:21.076 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:21.076 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:21.076 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:21.076 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:21.076 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:21.076 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:21.076 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:21.076 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.076 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.343 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:21.343 "name": "raid_bdev1", 00:26:21.343 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:21.343 "strip_size_kb": 0, 00:26:21.343 "state": "online", 00:26:21.343 "raid_level": "raid1", 00:26:21.343 "superblock": true, 00:26:21.343 "num_base_bdevs": 2, 00:26:21.343 "num_base_bdevs_discovered": 2, 00:26:21.343 "num_base_bdevs_operational": 2, 00:26:21.343 "base_bdevs_list": [ 00:26:21.343 { 00:26:21.343 "name": "BaseBdev1", 00:26:21.343 "uuid": "6fc23120-918c-5f94-9ddb-fb39fda1ab83", 00:26:21.343 "is_configured": true, 00:26:21.343 "data_offset": 256, 00:26:21.343 "data_size": 7936 00:26:21.343 }, 00:26:21.343 { 00:26:21.343 "name": "BaseBdev2", 00:26:21.343 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:21.343 "is_configured": true, 00:26:21.343 "data_offset": 256, 00:26:21.343 "data_size": 7936 00:26:21.343 } 00:26:21.343 ] 00:26:21.343 }' 00:26:21.343 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:21.343 10:41:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:21.908 10:41:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:21.908 10:41:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:22.166 [2024-07-25 10:41:25.783191] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:22.166 10:41:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:26:22.166 10:41:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.166 10:41:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:22.424 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:26:22.424 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:26:22.424 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:26:22.424 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:26:22.424 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:22.424 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:22.424 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:22.424 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:22.424 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:22.424 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:22.424 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:26:22.424 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:22.424 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:22.424 10:41:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:22.682 [2024-07-25 10:41:26.328422] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc74ec0 00:26:22.682 /dev/nbd0 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:22.682 1+0 records in 00:26:22.682 1+0 records out 00:26:22.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187082 s, 21.9 MB/s 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:26:22.682 10:41:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:26:23.616 7936+0 records in 00:26:23.616 7936+0 records out 00:26:23.616 32505856 bytes (33 MB, 31 MiB) copied, 0.805359 s, 40.4 MB/s 00:26:23.616 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:23.617 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:23.617 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:23.617 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:23.617 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:26:23.617 10:41:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:23.617 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:23.888 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:23.888 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:23.888 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:23.888 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:23.888 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:23.888 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:23.888 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:26:23.888 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:26:23.888 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:23.888 [2024-07-25 10:41:27.473802] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:24.147 [2024-07-25 10:41:27.698029] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:24.147 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:24.147 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:24.147 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:24.147 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:24.147 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:24.147 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:24.147 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:24.147 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:24.147 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:24.147 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:24.147 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.147 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.405 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:24.405 "name": "raid_bdev1", 00:26:24.405 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:24.405 "strip_size_kb": 0, 00:26:24.405 "state": "online", 00:26:24.405 "raid_level": "raid1", 00:26:24.405 "superblock": true, 00:26:24.405 "num_base_bdevs": 2, 00:26:24.405 "num_base_bdevs_discovered": 1, 00:26:24.405 "num_base_bdevs_operational": 1, 00:26:24.405 "base_bdevs_list": [ 00:26:24.405 { 00:26:24.405 "name": null, 00:26:24.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.405 "is_configured": false, 00:26:24.405 "data_offset": 256, 00:26:24.405 "data_size": 7936 00:26:24.405 }, 00:26:24.405 { 00:26:24.405 "name": "BaseBdev2", 
00:26:24.405 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:24.405 "is_configured": true, 00:26:24.405 "data_offset": 256, 00:26:24.405 "data_size": 7936 00:26:24.405 } 00:26:24.405 ] 00:26:24.405 }' 00:26:24.405 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:24.405 10:41:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:24.970 10:41:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:25.228 [2024-07-25 10:41:28.796987] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:25.228 [2024-07-25 10:41:28.800213] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc7d380 00:26:25.228 [2024-07-25 10:41:28.802291] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:25.228 10:41:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:26.162 10:41:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:26.162 10:41:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:26.162 10:41:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:26.162 10:41:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:26.162 10:41:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:26.162 10:41:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.162 10:41:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:26.420 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:26.420 "name": "raid_bdev1", 00:26:26.420 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:26.420 "strip_size_kb": 0, 00:26:26.420 "state": "online", 00:26:26.420 "raid_level": "raid1", 00:26:26.420 "superblock": true, 00:26:26.420 "num_base_bdevs": 2, 00:26:26.420 "num_base_bdevs_discovered": 2, 00:26:26.420 "num_base_bdevs_operational": 2, 00:26:26.420 "process": { 00:26:26.420 "type": "rebuild", 00:26:26.420 "target": "spare", 00:26:26.420 "progress": { 00:26:26.420 "blocks": 3072, 00:26:26.420 "percent": 38 00:26:26.420 } 00:26:26.420 }, 00:26:26.420 "base_bdevs_list": [ 00:26:26.420 { 00:26:26.420 "name": "spare", 00:26:26.420 "uuid": "52fd439b-7e84-5f88-b6d6-5af45ae7a646", 00:26:26.420 "is_configured": true, 00:26:26.420 "data_offset": 256, 00:26:26.420 "data_size": 7936 00:26:26.420 }, 00:26:26.420 { 00:26:26.420 "name": "BaseBdev2", 00:26:26.420 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:26.420 "is_configured": true, 00:26:26.420 "data_offset": 256, 00:26:26.420 "data_size": 7936 00:26:26.420 } 00:26:26.420 ] 00:26:26.420 }' 00:26:26.420 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:26.420 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:26.420 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:26.678 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:26.678 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev spare 00:26:26.678 [2024-07-25 10:41:30.383442] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:26.937 [2024-07-25 10:41:30.416023] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:26.937 [2024-07-25 10:41:30.416080] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:26.937 [2024-07-25 10:41:30.416110] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:26.937 [2024-07-25 10:41:30.416123] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:26.937 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:26.937 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:26.937 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:26.937 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:26.937 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:26.937 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:26.937 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:26.937 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:26.937 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:26.937 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:26.937 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.937 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.195 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:27.195 "name": "raid_bdev1", 00:26:27.195 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:27.195 "strip_size_kb": 0, 00:26:27.195 "state": "online", 00:26:27.195 "raid_level": "raid1", 00:26:27.195 "superblock": true, 00:26:27.195 "num_base_bdevs": 2, 00:26:27.195 "num_base_bdevs_discovered": 1, 00:26:27.195 "num_base_bdevs_operational": 1, 00:26:27.195 "base_bdevs_list": [ 00:26:27.195 { 00:26:27.195 "name": null, 00:26:27.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.195 "is_configured": false, 00:26:27.195 "data_offset": 256, 00:26:27.195 "data_size": 7936 00:26:27.195 }, 00:26:27.195 { 00:26:27.195 "name": "BaseBdev2", 00:26:27.195 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:27.195 "is_configured": true, 00:26:27.195 "data_offset": 256, 00:26:27.195 "data_size": 7936 00:26:27.195 } 00:26:27.195 ] 00:26:27.195 }' 00:26:27.195 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:27.195 10:41:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:27.761 10:41:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:27.761 10:41:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:27.761 10:41:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:27.761 10:41:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:27.761 10:41:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:27.761 10:41:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.761 10:41:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.019 10:41:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:28.019 "name": "raid_bdev1", 00:26:28.019 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:28.019 "strip_size_kb": 0, 00:26:28.019 "state": "online", 00:26:28.019 "raid_level": "raid1", 00:26:28.019 "superblock": true, 00:26:28.019 "num_base_bdevs": 2, 00:26:28.019 "num_base_bdevs_discovered": 1, 00:26:28.019 "num_base_bdevs_operational": 1, 00:26:28.019 "base_bdevs_list": [ 00:26:28.019 { 00:26:28.019 "name": null, 00:26:28.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.019 "is_configured": false, 00:26:28.019 "data_offset": 256, 00:26:28.019 "data_size": 7936 00:26:28.019 }, 00:26:28.019 { 00:26:28.019 "name": "BaseBdev2", 00:26:28.019 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:28.019 "is_configured": true, 00:26:28.019 "data_offset": 256, 00:26:28.019 "data_size": 7936 00:26:28.019 } 00:26:28.019 ] 00:26:28.019 }' 00:26:28.019 10:41:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:28.019 10:41:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:28.019 10:41:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:28.019 10:41:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:28.019 10:41:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:28.278 [2024-07-25 10:41:31.908574] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:28.278 [2024-07-25 10:41:31.911885] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc7dd30 00:26:28.278 [2024-07-25 10:41:31.913374] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:28.278 10:41:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:29.651 10:41:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:29.651 10:41:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:29.651 10:41:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:29.651 10:41:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:29.651 10:41:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:29.651 10:41:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.651 10:41:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.651 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:29.651 "name": "raid_bdev1", 00:26:29.651 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:29.651 "strip_size_kb": 0, 00:26:29.651 "state": "online", 00:26:29.651 "raid_level": "raid1", 00:26:29.651 "superblock": true, 00:26:29.651 "num_base_bdevs": 2, 
00:26:29.651 "num_base_bdevs_discovered": 2, 00:26:29.651 "num_base_bdevs_operational": 2, 00:26:29.651 "process": { 00:26:29.651 "type": "rebuild", 00:26:29.651 "target": "spare", 00:26:29.651 "progress": { 00:26:29.651 "blocks": 3072, 00:26:29.651 "percent": 38 00:26:29.651 } 00:26:29.651 }, 00:26:29.651 "base_bdevs_list": [ 00:26:29.651 { 00:26:29.651 "name": "spare", 00:26:29.651 "uuid": "52fd439b-7e84-5f88-b6d6-5af45ae7a646", 00:26:29.651 "is_configured": true, 00:26:29.651 "data_offset": 256, 00:26:29.651 "data_size": 7936 00:26:29.651 }, 00:26:29.651 { 00:26:29.651 "name": "BaseBdev2", 00:26:29.651 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:29.651 "is_configured": true, 00:26:29.651 "data_offset": 256, 00:26:29.651 "data_size": 7936 00:26:29.651 } 00:26:29.651 ] 00:26:29.651 }' 00:26:29.651 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:29.651 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:29.651 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:29.651 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:29.651 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:26:29.651 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:26:29.651 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:26:29.651 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:26:29.651 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:29.652 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 
-- # '[' 2 -gt 2 ']' 00:26:29.652 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1057 00:26:29.652 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:29.652 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:29.652 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:29.652 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:29.652 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:29.652 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:29.652 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.652 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.909 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:29.909 "name": "raid_bdev1", 00:26:29.909 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:29.909 "strip_size_kb": 0, 00:26:29.909 "state": "online", 00:26:29.909 "raid_level": "raid1", 00:26:29.910 "superblock": true, 00:26:29.910 "num_base_bdevs": 2, 00:26:29.910 "num_base_bdevs_discovered": 2, 00:26:29.910 "num_base_bdevs_operational": 2, 00:26:29.910 "process": { 00:26:29.910 "type": "rebuild", 00:26:29.910 "target": "spare", 00:26:29.910 "progress": { 00:26:29.910 "blocks": 3840, 00:26:29.910 "percent": 48 00:26:29.910 } 00:26:29.910 }, 00:26:29.910 "base_bdevs_list": [ 00:26:29.910 { 00:26:29.910 "name": "spare", 00:26:29.910 "uuid": 
"52fd439b-7e84-5f88-b6d6-5af45ae7a646", 00:26:29.910 "is_configured": true, 00:26:29.910 "data_offset": 256, 00:26:29.910 "data_size": 7936 00:26:29.910 }, 00:26:29.910 { 00:26:29.910 "name": "BaseBdev2", 00:26:29.910 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:29.910 "is_configured": true, 00:26:29.910 "data_offset": 256, 00:26:29.910 "data_size": 7936 00:26:29.910 } 00:26:29.910 ] 00:26:29.910 }' 00:26:29.910 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:29.910 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:29.910 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:29.910 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:29.910 10:41:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:31.279 10:41:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:31.279 10:41:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:31.279 10:41:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:31.279 10:41:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:31.279 10:41:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:31.279 10:41:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:31.279 10:41:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.279 10:41:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.279 10:41:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:31.279 "name": "raid_bdev1", 00:26:31.279 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:31.279 "strip_size_kb": 0, 00:26:31.279 "state": "online", 00:26:31.279 "raid_level": "raid1", 00:26:31.279 "superblock": true, 00:26:31.279 "num_base_bdevs": 2, 00:26:31.279 "num_base_bdevs_discovered": 2, 00:26:31.279 "num_base_bdevs_operational": 2, 00:26:31.279 "process": { 00:26:31.279 "type": "rebuild", 00:26:31.279 "target": "spare", 00:26:31.279 "progress": { 00:26:31.279 "blocks": 7168, 00:26:31.279 "percent": 90 00:26:31.279 } 00:26:31.279 }, 00:26:31.279 "base_bdevs_list": [ 00:26:31.279 { 00:26:31.279 "name": "spare", 00:26:31.279 "uuid": "52fd439b-7e84-5f88-b6d6-5af45ae7a646", 00:26:31.279 "is_configured": true, 00:26:31.279 "data_offset": 256, 00:26:31.279 "data_size": 7936 00:26:31.279 }, 00:26:31.279 { 00:26:31.279 "name": "BaseBdev2", 00:26:31.279 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:31.279 "is_configured": true, 00:26:31.279 "data_offset": 256, 00:26:31.279 "data_size": 7936 00:26:31.279 } 00:26:31.279 ] 00:26:31.279 }' 00:26:31.279 10:41:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:31.279 10:41:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:31.279 10:41:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:31.279 10:41:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:31.279 10:41:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:31.536 [2024-07-25 10:41:35.039632] bdev_raid.c:2870:raid_bdev_process_thread_run: 
*DEBUG*: process completed on raid_bdev1 00:26:31.536 [2024-07-25 10:41:35.039703] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:31.536 [2024-07-25 10:41:35.039808] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:32.467 10:41:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:32.467 10:41:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:32.467 10:41:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:32.467 10:41:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:32.467 10:41:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:32.467 10:41:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:32.467 10:41:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.467 10:41:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.467 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:32.467 "name": "raid_bdev1", 00:26:32.467 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:32.467 "strip_size_kb": 0, 00:26:32.467 "state": "online", 00:26:32.467 "raid_level": "raid1", 00:26:32.467 "superblock": true, 00:26:32.467 "num_base_bdevs": 2, 00:26:32.467 "num_base_bdevs_discovered": 2, 00:26:32.467 "num_base_bdevs_operational": 2, 00:26:32.467 "base_bdevs_list": [ 00:26:32.467 { 00:26:32.467 "name": "spare", 00:26:32.467 "uuid": "52fd439b-7e84-5f88-b6d6-5af45ae7a646", 
00:26:32.467 "is_configured": true, 00:26:32.467 "data_offset": 256, 00:26:32.467 "data_size": 7936 00:26:32.467 }, 00:26:32.467 { 00:26:32.467 "name": "BaseBdev2", 00:26:32.467 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:32.467 "is_configured": true, 00:26:32.467 "data_offset": 256, 00:26:32.467 "data_size": 7936 00:26:32.467 } 00:26:32.467 ] 00:26:32.467 }' 00:26:32.467 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:32.724 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:32.724 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:32.724 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:32.724 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:26:32.724 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:32.724 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:32.724 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:32.724 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:32.724 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:32.724 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.724 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.982 10:41:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:32.982 "name": "raid_bdev1", 00:26:32.982 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:32.982 "strip_size_kb": 0, 00:26:32.982 "state": "online", 00:26:32.982 "raid_level": "raid1", 00:26:32.982 "superblock": true, 00:26:32.982 "num_base_bdevs": 2, 00:26:32.982 "num_base_bdevs_discovered": 2, 00:26:32.982 "num_base_bdevs_operational": 2, 00:26:32.982 "base_bdevs_list": [ 00:26:32.982 { 00:26:32.982 "name": "spare", 00:26:32.982 "uuid": "52fd439b-7e84-5f88-b6d6-5af45ae7a646", 00:26:32.982 "is_configured": true, 00:26:32.982 "data_offset": 256, 00:26:32.982 "data_size": 7936 00:26:32.982 }, 00:26:32.982 { 00:26:32.982 "name": "BaseBdev2", 00:26:32.982 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:32.982 "is_configured": true, 00:26:32.982 "data_offset": 256, 00:26:32.982 "data_size": 7936 00:26:32.982 } 00:26:32.982 ] 00:26:32.982 }' 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:32.982 10:41:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.982 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.239 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:33.239 "name": "raid_bdev1", 00:26:33.239 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:33.239 "strip_size_kb": 0, 00:26:33.239 "state": "online", 00:26:33.239 "raid_level": "raid1", 00:26:33.239 "superblock": true, 00:26:33.239 "num_base_bdevs": 2, 00:26:33.239 "num_base_bdevs_discovered": 2, 00:26:33.239 "num_base_bdevs_operational": 2, 00:26:33.239 "base_bdevs_list": [ 00:26:33.239 { 00:26:33.239 "name": "spare", 00:26:33.239 "uuid": "52fd439b-7e84-5f88-b6d6-5af45ae7a646", 00:26:33.239 "is_configured": true, 00:26:33.239 "data_offset": 256, 00:26:33.239 "data_size": 7936 00:26:33.239 }, 00:26:33.239 { 00:26:33.239 "name": "BaseBdev2", 00:26:33.239 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:33.239 "is_configured": true, 00:26:33.239 "data_offset": 256, 00:26:33.239 "data_size": 7936 00:26:33.239 } 00:26:33.239 ] 
00:26:33.239 }' 00:26:33.239 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:33.240 10:41:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:33.805 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:34.062 [2024-07-25 10:41:37.714978] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:34.062 [2024-07-25 10:41:37.715013] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:34.062 [2024-07-25 10:41:37.715100] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:34.062 [2024-07-25 10:41:37.715194] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:34.062 [2024-07-25 10:41:37.715210] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe01bf0 name raid_bdev1, state offline 00:26:34.062 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.062 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:26:34.319 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:34.319 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:34.319 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:26:34.319 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:34.319 10:41:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:34.319 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:34.319 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:34.319 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:34.319 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:34.319 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:26:34.320 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:34.320 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:34.320 10:41:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:34.577 /dev/nbd0 00:26:34.577 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:34.577 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:34.577 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:34.577 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:26:34.577 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:34.577 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:34.577 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:34.577 10:41:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:26:34.577 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:34.577 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:34.577 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:34.835 1+0 records in 00:26:34.835 1+0 records out 00:26:34.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195771 s, 20.9 MB/s 00:26:34.835 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:34.835 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:26:34.835 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:34.835 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:34.835 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:26:34.835 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:34.835 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:34.835 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:35.094 /dev/nbd1 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:35.094 10:41:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:35.094 1+0 records in 00:26:35.094 1+0 records out 00:26:35.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260857 s, 15.7 MB/s 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:35.094 10:41:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:35.094 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:35.355 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:35.355 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:35.355 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:35.355 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:35.355 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:35.355 10:41:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:35.355 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:26:35.355 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:26:35.355 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:35.355 10:41:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:35.617 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:35.617 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:35.617 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:35.617 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:35.617 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:35.617 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:35.617 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:26:35.617 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:26:35.617 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:26:35.617 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:35.875 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:36.135 [2024-07-25 10:41:39.696383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:36.135 [2024-07-25 10:41:39.696441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.135 [2024-07-25 10:41:39.696469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc77700 00:26:36.135 [2024-07-25 10:41:39.696485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:36.135 [2024-07-25 10:41:39.698082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.135 [2024-07-25 10:41:39.698118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:36.135 [2024-07-25 10:41:39.698191] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:36.135 [2024-07-25 10:41:39.698230] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:36.135 [2024-07-25 10:41:39.698350] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:36.135 spare 00:26:36.135 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:36.135 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:36.135 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:36.135 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:36.135 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:36.135 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:26:36.135 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:36.135 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:36.135 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:36.135 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:36.135 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.135 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.135 [2024-07-25 10:41:39.798689] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xc77b50 00:26:36.135 [2024-07-25 10:41:39.798710] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:36.135 [2024-07-25 10:41:39.798783] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc78ca0 00:26:36.135 [2024-07-25 10:41:39.798926] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc77b50 00:26:36.135 [2024-07-25 10:41:39.798943] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc77b50 00:26:36.135 [2024-07-25 10:41:39.799033] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:36.393 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:36.393 "name": "raid_bdev1", 00:26:36.393 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:36.393 "strip_size_kb": 0, 00:26:36.393 "state": "online", 00:26:36.393 "raid_level": "raid1", 00:26:36.393 "superblock": true, 00:26:36.393 "num_base_bdevs": 2, 00:26:36.393 
"num_base_bdevs_discovered": 2, 00:26:36.393 "num_base_bdevs_operational": 2, 00:26:36.393 "base_bdevs_list": [ 00:26:36.393 { 00:26:36.393 "name": "spare", 00:26:36.393 "uuid": "52fd439b-7e84-5f88-b6d6-5af45ae7a646", 00:26:36.393 "is_configured": true, 00:26:36.393 "data_offset": 256, 00:26:36.393 "data_size": 7936 00:26:36.393 }, 00:26:36.393 { 00:26:36.393 "name": "BaseBdev2", 00:26:36.393 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:36.393 "is_configured": true, 00:26:36.393 "data_offset": 256, 00:26:36.393 "data_size": 7936 00:26:36.393 } 00:26:36.393 ] 00:26:36.393 }' 00:26:36.393 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:36.393 10:41:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:36.958 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:36.958 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:36.958 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:36.958 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:36.958 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:36.958 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.958 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.216 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:37.216 "name": "raid_bdev1", 00:26:37.216 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:37.216 
"strip_size_kb": 0, 00:26:37.216 "state": "online", 00:26:37.216 "raid_level": "raid1", 00:26:37.216 "superblock": true, 00:26:37.216 "num_base_bdevs": 2, 00:26:37.216 "num_base_bdevs_discovered": 2, 00:26:37.216 "num_base_bdevs_operational": 2, 00:26:37.216 "base_bdevs_list": [ 00:26:37.216 { 00:26:37.216 "name": "spare", 00:26:37.216 "uuid": "52fd439b-7e84-5f88-b6d6-5af45ae7a646", 00:26:37.216 "is_configured": true, 00:26:37.216 "data_offset": 256, 00:26:37.216 "data_size": 7936 00:26:37.216 }, 00:26:37.216 { 00:26:37.216 "name": "BaseBdev2", 00:26:37.216 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:37.216 "is_configured": true, 00:26:37.216 "data_offset": 256, 00:26:37.216 "data_size": 7936 00:26:37.216 } 00:26:37.216 ] 00:26:37.216 }' 00:26:37.216 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:37.216 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:37.216 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:37.216 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:37.216 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.216 10:41:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:37.782 [2024-07-25 10:41:41.421132] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.782 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.040 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:38.040 "name": "raid_bdev1", 00:26:38.040 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:38.040 "strip_size_kb": 0, 00:26:38.040 "state": "online", 00:26:38.040 "raid_level": "raid1", 00:26:38.040 "superblock": true, 00:26:38.040 
"num_base_bdevs": 2, 00:26:38.040 "num_base_bdevs_discovered": 1, 00:26:38.040 "num_base_bdevs_operational": 1, 00:26:38.040 "base_bdevs_list": [ 00:26:38.040 { 00:26:38.040 "name": null, 00:26:38.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.040 "is_configured": false, 00:26:38.040 "data_offset": 256, 00:26:38.040 "data_size": 7936 00:26:38.040 }, 00:26:38.040 { 00:26:38.040 "name": "BaseBdev2", 00:26:38.040 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:38.040 "is_configured": true, 00:26:38.040 "data_offset": 256, 00:26:38.040 "data_size": 7936 00:26:38.040 } 00:26:38.040 ] 00:26:38.040 }' 00:26:38.040 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:38.040 10:41:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:38.637 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:38.895 [2024-07-25 10:41:42.459877] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:38.895 [2024-07-25 10:41:42.460087] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:38.895 [2024-07-25 10:41:42.460118] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:38.895 [2024-07-25 10:41:42.460154] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:38.895 [2024-07-25 10:41:42.462958] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc760b0 00:26:38.895 [2024-07-25 10:41:42.465012] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:38.895 10:41:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:26:39.826 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:39.826 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:39.826 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:39.826 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:39.826 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:39.826 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.826 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.084 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:40.084 "name": "raid_bdev1", 00:26:40.084 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:40.084 "strip_size_kb": 0, 00:26:40.084 "state": "online", 00:26:40.084 "raid_level": "raid1", 00:26:40.084 "superblock": true, 00:26:40.084 "num_base_bdevs": 2, 00:26:40.084 "num_base_bdevs_discovered": 2, 00:26:40.084 "num_base_bdevs_operational": 2, 00:26:40.084 "process": { 00:26:40.084 "type": "rebuild", 00:26:40.084 
"target": "spare", 00:26:40.084 "progress": { 00:26:40.084 "blocks": 3072, 00:26:40.084 "percent": 38 00:26:40.084 } 00:26:40.084 }, 00:26:40.084 "base_bdevs_list": [ 00:26:40.084 { 00:26:40.084 "name": "spare", 00:26:40.084 "uuid": "52fd439b-7e84-5f88-b6d6-5af45ae7a646", 00:26:40.084 "is_configured": true, 00:26:40.084 "data_offset": 256, 00:26:40.084 "data_size": 7936 00:26:40.084 }, 00:26:40.084 { 00:26:40.084 "name": "BaseBdev2", 00:26:40.084 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:40.084 "is_configured": true, 00:26:40.084 "data_offset": 256, 00:26:40.084 "data_size": 7936 00:26:40.084 } 00:26:40.084 ] 00:26:40.084 }' 00:26:40.084 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:40.341 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:40.341 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:40.341 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:40.341 10:41:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:40.597 [2024-07-25 10:41:44.135293] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:40.597 [2024-07-25 10:41:44.179355] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:40.597 [2024-07-25 10:41:44.179410] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:40.597 [2024-07-25 10:41:44.179432] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:40.597 [2024-07-25 10:41:44.179443] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 
00:26:40.597 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:40.597 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:40.597 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:40.597 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:40.597 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:40.597 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:40.597 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:40.597 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:40.597 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:40.597 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:40.597 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.597 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.884 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:40.884 "name": "raid_bdev1", 00:26:40.884 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:40.884 "strip_size_kb": 0, 00:26:40.884 "state": "online", 00:26:40.884 "raid_level": "raid1", 00:26:40.884 "superblock": true, 00:26:40.884 "num_base_bdevs": 2, 00:26:40.884 "num_base_bdevs_discovered": 1, 
00:26:40.884 "num_base_bdevs_operational": 1, 00:26:40.884 "base_bdevs_list": [ 00:26:40.884 { 00:26:40.884 "name": null, 00:26:40.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.884 "is_configured": false, 00:26:40.884 "data_offset": 256, 00:26:40.884 "data_size": 7936 00:26:40.884 }, 00:26:40.884 { 00:26:40.884 "name": "BaseBdev2", 00:26:40.884 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:40.884 "is_configured": true, 00:26:40.884 "data_offset": 256, 00:26:40.884 "data_size": 7936 00:26:40.884 } 00:26:40.884 ] 00:26:40.884 }' 00:26:40.884 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:40.884 10:41:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:41.448 10:41:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:41.705 [2024-07-25 10:41:45.266640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:41.705 [2024-07-25 10:41:45.266710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:41.705 [2024-07-25 10:41:45.266740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc77370 00:26:41.705 [2024-07-25 10:41:45.266755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:41.705 [2024-07-25 10:41:45.267051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:41.705 [2024-07-25 10:41:45.267077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:41.705 [2024-07-25 10:41:45.267162] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:41.705 [2024-07-25 10:41:45.267182] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller 
than existing raid bdev raid_bdev1 (5) 00:26:41.705 [2024-07-25 10:41:45.267193] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:41.705 [2024-07-25 10:41:45.267217] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:41.705 [2024-07-25 10:41:45.270022] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc78ca0 00:26:41.705 [2024-07-25 10:41:45.271495] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:41.705 spare 00:26:41.705 10:41:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:26:42.637 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:42.637 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:42.637 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:42.637 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:42.637 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:42.637 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.637 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.895 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:42.895 "name": "raid_bdev1", 00:26:42.895 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:42.895 "strip_size_kb": 0, 00:26:42.895 "state": "online", 00:26:42.895 "raid_level": "raid1", 00:26:42.895 "superblock": true, 
00:26:42.895 "num_base_bdevs": 2, 00:26:42.895 "num_base_bdevs_discovered": 2, 00:26:42.895 "num_base_bdevs_operational": 2, 00:26:42.895 "process": { 00:26:42.895 "type": "rebuild", 00:26:42.895 "target": "spare", 00:26:42.895 "progress": { 00:26:42.895 "blocks": 3072, 00:26:42.895 "percent": 38 00:26:42.895 } 00:26:42.895 }, 00:26:42.895 "base_bdevs_list": [ 00:26:42.895 { 00:26:42.895 "name": "spare", 00:26:42.895 "uuid": "52fd439b-7e84-5f88-b6d6-5af45ae7a646", 00:26:42.895 "is_configured": true, 00:26:42.895 "data_offset": 256, 00:26:42.895 "data_size": 7936 00:26:42.895 }, 00:26:42.895 { 00:26:42.895 "name": "BaseBdev2", 00:26:42.895 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:42.895 "is_configured": true, 00:26:42.895 "data_offset": 256, 00:26:42.895 "data_size": 7936 00:26:42.895 } 00:26:42.895 ] 00:26:42.895 }' 00:26:42.895 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:43.152 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:43.152 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:43.152 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:43.152 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:43.410 [2024-07-25 10:41:46.873146] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:43.410 [2024-07-25 10:41:46.885084] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:43.410 [2024-07-25 10:41:46.885150] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:43.410 [2024-07-25 10:41:46.885172] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:43.410 [2024-07-25 10:41:46.885183] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:43.410 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:43.410 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:43.410 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:43.410 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:43.410 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:43.410 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:43.410 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:43.410 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:43.410 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:43.410 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:43.410 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.410 10:41:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.667 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:43.667 "name": "raid_bdev1", 00:26:43.667 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 
00:26:43.667 "strip_size_kb": 0, 00:26:43.667 "state": "online", 00:26:43.667 "raid_level": "raid1", 00:26:43.668 "superblock": true, 00:26:43.668 "num_base_bdevs": 2, 00:26:43.668 "num_base_bdevs_discovered": 1, 00:26:43.668 "num_base_bdevs_operational": 1, 00:26:43.668 "base_bdevs_list": [ 00:26:43.668 { 00:26:43.668 "name": null, 00:26:43.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.668 "is_configured": false, 00:26:43.668 "data_offset": 256, 00:26:43.668 "data_size": 7936 00:26:43.668 }, 00:26:43.668 { 00:26:43.668 "name": "BaseBdev2", 00:26:43.668 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:43.668 "is_configured": true, 00:26:43.668 "data_offset": 256, 00:26:43.668 "data_size": 7936 00:26:43.668 } 00:26:43.668 ] 00:26:43.668 }' 00:26:43.668 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:43.668 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:44.233 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:44.233 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:44.233 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:44.233 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:44.233 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:44.233 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.233 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.491 10:41:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:44.491 "name": "raid_bdev1", 00:26:44.491 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:44.491 "strip_size_kb": 0, 00:26:44.491 "state": "online", 00:26:44.491 "raid_level": "raid1", 00:26:44.491 "superblock": true, 00:26:44.491 "num_base_bdevs": 2, 00:26:44.491 "num_base_bdevs_discovered": 1, 00:26:44.491 "num_base_bdevs_operational": 1, 00:26:44.491 "base_bdevs_list": [ 00:26:44.491 { 00:26:44.491 "name": null, 00:26:44.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.491 "is_configured": false, 00:26:44.491 "data_offset": 256, 00:26:44.491 "data_size": 7936 00:26:44.491 }, 00:26:44.491 { 00:26:44.491 "name": "BaseBdev2", 00:26:44.491 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:44.491 "is_configured": true, 00:26:44.491 "data_offset": 256, 00:26:44.491 "data_size": 7936 00:26:44.491 } 00:26:44.491 ] 00:26:44.491 }' 00:26:44.491 10:41:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:44.491 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:44.491 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:44.491 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:44.491 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:44.749 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:45.006 [2024-07-25 10:41:48.589652] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:26:45.006 [2024-07-25 10:41:48.589727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.006 [2024-07-25 10:41:48.589754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc7eff0 00:26:45.006 [2024-07-25 10:41:48.589767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:45.006 [2024-07-25 10:41:48.590020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.006 [2024-07-25 10:41:48.590048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:45.006 [2024-07-25 10:41:48.590128] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:45.006 [2024-07-25 10:41:48.590145] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:45.006 [2024-07-25 10:41:48.590154] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:45.006 BaseBdev1 00:26:45.006 10:41:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:26:45.939 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:45.939 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:45.939 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:45.939 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:45.939 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:45.939 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:45.939 
10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:45.939 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:45.939 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:45.939 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:45.939 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.939 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.197 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:46.197 "name": "raid_bdev1", 00:26:46.197 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:46.197 "strip_size_kb": 0, 00:26:46.197 "state": "online", 00:26:46.197 "raid_level": "raid1", 00:26:46.197 "superblock": true, 00:26:46.197 "num_base_bdevs": 2, 00:26:46.197 "num_base_bdevs_discovered": 1, 00:26:46.197 "num_base_bdevs_operational": 1, 00:26:46.197 "base_bdevs_list": [ 00:26:46.197 { 00:26:46.197 "name": null, 00:26:46.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.197 "is_configured": false, 00:26:46.197 "data_offset": 256, 00:26:46.197 "data_size": 7936 00:26:46.197 }, 00:26:46.197 { 00:26:46.197 "name": "BaseBdev2", 00:26:46.197 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:46.197 "is_configured": true, 00:26:46.197 "data_offset": 256, 00:26:46.197 "data_size": 7936 00:26:46.197 } 00:26:46.197 ] 00:26:46.197 }' 00:26:46.197 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:46.197 10:41:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:26:46.763 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:46.763 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:46.763 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:46.763 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:46.763 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:46.763 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.763 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.021 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:47.021 "name": "raid_bdev1", 00:26:47.021 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:47.021 "strip_size_kb": 0, 00:26:47.021 "state": "online", 00:26:47.021 "raid_level": "raid1", 00:26:47.021 "superblock": true, 00:26:47.021 "num_base_bdevs": 2, 00:26:47.021 "num_base_bdevs_discovered": 1, 00:26:47.021 "num_base_bdevs_operational": 1, 00:26:47.021 "base_bdevs_list": [ 00:26:47.021 { 00:26:47.021 "name": null, 00:26:47.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.021 "is_configured": false, 00:26:47.021 "data_offset": 256, 00:26:47.021 "data_size": 7936 00:26:47.021 }, 00:26:47.021 { 00:26:47.021 "name": "BaseBdev2", 00:26:47.021 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:47.021 "is_configured": true, 00:26:47.021 "data_offset": 256, 00:26:47.021 "data_size": 7936 00:26:47.021 } 00:26:47.021 ] 00:26:47.021 }' 00:26:47.021 10:41:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:47.278 10:41:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:47.535 [2024-07-25 10:41:51.084315] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:47.535 [2024-07-25 10:41:51.084524] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:47.535 [2024-07-25 10:41:51.084546] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:47.535 request: 00:26:47.535 { 00:26:47.535 "base_bdev": "BaseBdev1", 00:26:47.535 "raid_bdev": "raid_bdev1", 00:26:47.535 "method": "bdev_raid_add_base_bdev", 00:26:47.535 "req_id": 1 00:26:47.535 } 00:26:47.535 Got JSON-RPC error response 00:26:47.535 response: 00:26:47.535 { 00:26:47.535 "code": -22, 00:26:47.535 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:47.535 } 00:26:47.535 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:26:47.535 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:47.535 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:47.535 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:47.535 10:41:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # 
sleep 1 00:26:48.467 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:48.467 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:48.467 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:48.467 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:48.467 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:48.467 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:48.467 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:48.467 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:48.467 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:48.467 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:48.467 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.467 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.725 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:48.725 "name": "raid_bdev1", 00:26:48.725 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:48.725 "strip_size_kb": 0, 00:26:48.725 "state": "online", 00:26:48.725 "raid_level": "raid1", 00:26:48.725 "superblock": true, 00:26:48.725 "num_base_bdevs": 2, 00:26:48.725 "num_base_bdevs_discovered": 1, 
00:26:48.725 "num_base_bdevs_operational": 1, 00:26:48.725 "base_bdevs_list": [ 00:26:48.725 { 00:26:48.725 "name": null, 00:26:48.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.725 "is_configured": false, 00:26:48.725 "data_offset": 256, 00:26:48.725 "data_size": 7936 00:26:48.725 }, 00:26:48.725 { 00:26:48.725 "name": "BaseBdev2", 00:26:48.725 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:48.725 "is_configured": true, 00:26:48.725 "data_offset": 256, 00:26:48.725 "data_size": 7936 00:26:48.725 } 00:26:48.725 ] 00:26:48.725 }' 00:26:48.725 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:48.725 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:49.291 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:49.291 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:49.291 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:49.291 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:49.291 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:49.291 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.291 10:41:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:49.549 "name": "raid_bdev1", 00:26:49.549 "uuid": "25f49233-5cf5-4b93-b827-dd88a067c529", 00:26:49.549 "strip_size_kb": 0, 00:26:49.549 
"state": "online", 00:26:49.549 "raid_level": "raid1", 00:26:49.549 "superblock": true, 00:26:49.549 "num_base_bdevs": 2, 00:26:49.549 "num_base_bdevs_discovered": 1, 00:26:49.549 "num_base_bdevs_operational": 1, 00:26:49.549 "base_bdevs_list": [ 00:26:49.549 { 00:26:49.549 "name": null, 00:26:49.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.549 "is_configured": false, 00:26:49.549 "data_offset": 256, 00:26:49.549 "data_size": 7936 00:26:49.549 }, 00:26:49.549 { 00:26:49.549 "name": "BaseBdev2", 00:26:49.549 "uuid": "44c22496-a259-54e7-bc75-5937f5d0be46", 00:26:49.549 "is_configured": true, 00:26:49.549 "data_offset": 256, 00:26:49.549 "data_size": 7936 00:26:49.549 } 00:26:49.549 ] 00:26:49.549 }' 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 2471701 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 2471701 ']' 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 2471701 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2471701 00:26:49.549 10:41:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2471701' 00:26:49.549 killing process with pid 2471701 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 2471701 00:26:49.549 Received shutdown signal, test time was about 60.000000 seconds 00:26:49.549 00:26:49.549 Latency(us) 00:26:49.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.549 =================================================================================================================== 00:26:49.549 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:49.549 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 2471701 00:26:49.549 [2024-07-25 10:41:53.254022] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:49.549 [2024-07-25 10:41:53.254210] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:49.549 [2024-07-25 10:41:53.254267] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:49.549 [2024-07-25 10:41:53.254281] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc77b50 name raid_bdev1, state offline 00:26:49.808 [2024-07-25 10:41:53.300408] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:50.065 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:26:50.065 00:26:50.065 real 0m32.044s 00:26:50.065 user 0m50.637s 00:26:50.065 sys 0m4.049s 00:26:50.065 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:26:50.065 10:41:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:50.065 ************************************ 00:26:50.065 END TEST raid_rebuild_test_sb_md_separate 00:26:50.065 ************************************ 00:26:50.065 10:41:53 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:26:50.065 10:41:53 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:26:50.065 10:41:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:50.065 10:41:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:50.065 10:41:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:50.065 ************************************ 00:26:50.065 START TEST raid_state_function_test_sb_md_interleaved 00:26:50.065 ************************************ 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:50.065 10:41:53 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # 
raid_pid=2475986 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2475986' 00:26:50.065 Process raid pid: 2475986 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 2475986 /var/tmp/spdk-raid.sock 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 2475986 ']' 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:50.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:50.065 10:41:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:50.065 [2024-07-25 10:41:53.695916] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:26:50.065 [2024-07-25 10:41:53.695982] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.065 [2024-07-25 10:41:53.772856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.323 [2024-07-25 10:41:53.881224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.323 [2024-07-25 10:41:53.953020] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:50.323 [2024-07-25 10:41:53.953062] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:51.255 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:51.255 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:26:51.255 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:51.255 [2024-07-25 10:41:54.955177] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:51.255 [2024-07-25 10:41:54.955226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:51.255 [2024-07-25 10:41:54.955239] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:51.255 [2024-07-25 10:41:54.955252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:51.513 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:51.513 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:51.513 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:51.513 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:51.513 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:51.513 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:51.513 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.513 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.513 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.513 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.513 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.513 10:41:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:51.770 10:41:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:51.770 "name": "Existed_Raid", 00:26:51.770 "uuid": "7b555878-1faa-49d2-8cb3-a22ce61331fb", 00:26:51.770 "strip_size_kb": 0, 00:26:51.770 "state": "configuring", 00:26:51.770 "raid_level": "raid1", 00:26:51.770 "superblock": true, 00:26:51.770 "num_base_bdevs": 2, 00:26:51.770 "num_base_bdevs_discovered": 0, 00:26:51.770 "num_base_bdevs_operational": 2, 00:26:51.770 "base_bdevs_list": [ 00:26:51.770 { 
00:26:51.770 "name": "BaseBdev1", 00:26:51.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.770 "is_configured": false, 00:26:51.771 "data_offset": 0, 00:26:51.771 "data_size": 0 00:26:51.771 }, 00:26:51.771 { 00:26:51.771 "name": "BaseBdev2", 00:26:51.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.771 "is_configured": false, 00:26:51.771 "data_offset": 0, 00:26:51.771 "data_size": 0 00:26:51.771 } 00:26:51.771 ] 00:26:51.771 }' 00:26:51.771 10:41:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:51.771 10:41:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:52.336 10:41:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:52.593 [2024-07-25 10:41:56.114072] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:52.593 [2024-07-25 10:41:56.114140] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12ec600 name Existed_Raid, state configuring 00:26:52.593 10:41:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:52.851 [2024-07-25 10:41:56.358758] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:52.851 [2024-07-25 10:41:56.358795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:52.851 [2024-07-25 10:41:56.358813] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:52.851 [2024-07-25 10:41:56.358823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:52.851 
10:41:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:26:53.109 [2024-07-25 10:41:56.664514] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:53.109 BaseBdev1 00:26:53.109 10:41:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:53.109 10:41:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:53.109 10:41:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:53.109 10:41:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:26:53.109 10:41:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:53.109 10:41:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:53.109 10:41:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:53.367 10:41:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:53.624 [ 00:26:53.624 { 00:26:53.624 "name": "BaseBdev1", 00:26:53.624 "aliases": [ 00:26:53.624 "b1b404f5-72f7-4690-ac1a-f6b5ccd3d997" 00:26:53.624 ], 00:26:53.624 "product_name": "Malloc disk", 00:26:53.624 "block_size": 4128, 00:26:53.624 "num_blocks": 8192, 00:26:53.624 "uuid": "b1b404f5-72f7-4690-ac1a-f6b5ccd3d997", 00:26:53.624 "md_size": 32, 00:26:53.624 
"md_interleave": true, 00:26:53.624 "dif_type": 0, 00:26:53.624 "assigned_rate_limits": { 00:26:53.624 "rw_ios_per_sec": 0, 00:26:53.624 "rw_mbytes_per_sec": 0, 00:26:53.624 "r_mbytes_per_sec": 0, 00:26:53.624 "w_mbytes_per_sec": 0 00:26:53.624 }, 00:26:53.624 "claimed": true, 00:26:53.624 "claim_type": "exclusive_write", 00:26:53.624 "zoned": false, 00:26:53.624 "supported_io_types": { 00:26:53.624 "read": true, 00:26:53.624 "write": true, 00:26:53.624 "unmap": true, 00:26:53.624 "flush": true, 00:26:53.624 "reset": true, 00:26:53.624 "nvme_admin": false, 00:26:53.624 "nvme_io": false, 00:26:53.624 "nvme_io_md": false, 00:26:53.624 "write_zeroes": true, 00:26:53.624 "zcopy": true, 00:26:53.624 "get_zone_info": false, 00:26:53.624 "zone_management": false, 00:26:53.624 "zone_append": false, 00:26:53.624 "compare": false, 00:26:53.624 "compare_and_write": false, 00:26:53.624 "abort": true, 00:26:53.624 "seek_hole": false, 00:26:53.624 "seek_data": false, 00:26:53.624 "copy": true, 00:26:53.624 "nvme_iov_md": false 00:26:53.624 }, 00:26:53.624 "memory_domains": [ 00:26:53.624 { 00:26:53.624 "dma_device_id": "system", 00:26:53.624 "dma_device_type": 1 00:26:53.624 }, 00:26:53.624 { 00:26:53.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.624 "dma_device_type": 2 00:26:53.624 } 00:26:53.624 ], 00:26:53.624 "driver_specific": {} 00:26:53.624 } 00:26:53.624 ] 00:26:53.624 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:26:53.624 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:53.624 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:53.624 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:53.624 10:41:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:53.624 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:53.624 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:53.624 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:53.624 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:53.624 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:53.624 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:53.625 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.625 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:53.882 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:53.882 "name": "Existed_Raid", 00:26:53.882 "uuid": "5242fb17-b36f-43f5-88e7-3e49577d56e5", 00:26:53.882 "strip_size_kb": 0, 00:26:53.882 "state": "configuring", 00:26:53.882 "raid_level": "raid1", 00:26:53.882 "superblock": true, 00:26:53.882 "num_base_bdevs": 2, 00:26:53.882 "num_base_bdevs_discovered": 1, 00:26:53.882 "num_base_bdevs_operational": 2, 00:26:53.882 "base_bdevs_list": [ 00:26:53.882 { 00:26:53.882 "name": "BaseBdev1", 00:26:53.882 "uuid": "b1b404f5-72f7-4690-ac1a-f6b5ccd3d997", 00:26:53.882 "is_configured": true, 00:26:53.882 "data_offset": 256, 00:26:53.882 "data_size": 7936 00:26:53.882 }, 
00:26:53.882 { 00:26:53.882 "name": "BaseBdev2", 00:26:53.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.882 "is_configured": false, 00:26:53.882 "data_offset": 0, 00:26:53.882 "data_size": 0 00:26:53.882 } 00:26:53.882 ] 00:26:53.882 }' 00:26:53.882 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:53.882 10:41:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:54.448 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:54.706 [2024-07-25 10:41:58.264763] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:54.706 [2024-07-25 10:41:58.264816] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12ebe50 name Existed_Raid, state configuring 00:26:54.706 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:54.963 [2024-07-25 10:41:58.513448] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:54.963 [2024-07-25 10:41:58.514909] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:54.963 [2024-07-25 10:41:58.514943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.963 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:55.221 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:55.221 "name": "Existed_Raid", 00:26:55.221 "uuid": "7c2b05c0-a0e5-4384-8b10-c4a998a979cc", 00:26:55.221 "strip_size_kb": 0, 00:26:55.221 "state": "configuring", 00:26:55.221 "raid_level": "raid1", 00:26:55.221 "superblock": true, 00:26:55.221 "num_base_bdevs": 2, 
00:26:55.221 "num_base_bdevs_discovered": 1, 00:26:55.221 "num_base_bdevs_operational": 2, 00:26:55.221 "base_bdevs_list": [ 00:26:55.221 { 00:26:55.221 "name": "BaseBdev1", 00:26:55.221 "uuid": "b1b404f5-72f7-4690-ac1a-f6b5ccd3d997", 00:26:55.221 "is_configured": true, 00:26:55.221 "data_offset": 256, 00:26:55.221 "data_size": 7936 00:26:55.221 }, 00:26:55.221 { 00:26:55.221 "name": "BaseBdev2", 00:26:55.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.221 "is_configured": false, 00:26:55.221 "data_offset": 0, 00:26:55.221 "data_size": 0 00:26:55.221 } 00:26:55.221 ] 00:26:55.221 }' 00:26:55.221 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:55.221 10:41:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:55.814 10:41:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:26:56.071 [2024-07-25 10:41:59.550361] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:56.071 [2024-07-25 10:41:59.550577] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x12eb720 00:26:56.071 [2024-07-25 10:41:59.550596] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:56.071 [2024-07-25 10:41:59.550660] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12eccb0 00:26:56.071 [2024-07-25 10:41:59.550759] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12eb720 00:26:56.071 [2024-07-25 10:41:59.550775] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x12eb720 00:26:56.071 [2024-07-25 10:41:59.550852] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:56.071 BaseBdev2 
00:26:56.071 10:41:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:56.071 10:41:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:56.071 10:41:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:56.071 10:41:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:26:56.071 10:41:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:56.071 10:41:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:56.071 10:41:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:56.330 10:41:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:56.330 [ 00:26:56.330 { 00:26:56.330 "name": "BaseBdev2", 00:26:56.330 "aliases": [ 00:26:56.330 "7c1b3dba-8084-4a6f-9491-a3681b014824" 00:26:56.330 ], 00:26:56.330 "product_name": "Malloc disk", 00:26:56.330 "block_size": 4128, 00:26:56.330 "num_blocks": 8192, 00:26:56.330 "uuid": "7c1b3dba-8084-4a6f-9491-a3681b014824", 00:26:56.330 "md_size": 32, 00:26:56.330 "md_interleave": true, 00:26:56.330 "dif_type": 0, 00:26:56.330 "assigned_rate_limits": { 00:26:56.330 "rw_ios_per_sec": 0, 00:26:56.330 "rw_mbytes_per_sec": 0, 00:26:56.330 "r_mbytes_per_sec": 0, 00:26:56.330 "w_mbytes_per_sec": 0 00:26:56.330 }, 00:26:56.330 "claimed": true, 00:26:56.330 "claim_type": "exclusive_write", 00:26:56.330 "zoned": false, 00:26:56.330 "supported_io_types": { 
00:26:56.330 "read": true, 00:26:56.330 "write": true, 00:26:56.330 "unmap": true, 00:26:56.330 "flush": true, 00:26:56.330 "reset": true, 00:26:56.330 "nvme_admin": false, 00:26:56.330 "nvme_io": false, 00:26:56.330 "nvme_io_md": false, 00:26:56.330 "write_zeroes": true, 00:26:56.330 "zcopy": true, 00:26:56.330 "get_zone_info": false, 00:26:56.330 "zone_management": false, 00:26:56.330 "zone_append": false, 00:26:56.330 "compare": false, 00:26:56.330 "compare_and_write": false, 00:26:56.330 "abort": true, 00:26:56.330 "seek_hole": false, 00:26:56.330 "seek_data": false, 00:26:56.330 "copy": true, 00:26:56.330 "nvme_iov_md": false 00:26:56.330 }, 00:26:56.330 "memory_domains": [ 00:26:56.330 { 00:26:56.330 "dma_device_id": "system", 00:26:56.330 "dma_device_type": 1 00:26:56.330 }, 00:26:56.330 { 00:26:56.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.330 "dma_device_type": 2 00:26:56.330 } 00:26:56.330 ], 00:26:56.330 "driver_specific": {} 00:26:56.330 } 00:26:56.330 ] 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.588 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:56.845 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:56.845 "name": "Existed_Raid", 00:26:56.845 "uuid": "7c2b05c0-a0e5-4384-8b10-c4a998a979cc", 00:26:56.845 "strip_size_kb": 0, 00:26:56.845 "state": "online", 00:26:56.845 "raid_level": "raid1", 00:26:56.845 "superblock": true, 00:26:56.845 "num_base_bdevs": 2, 00:26:56.845 "num_base_bdevs_discovered": 2, 00:26:56.845 "num_base_bdevs_operational": 2, 00:26:56.845 "base_bdevs_list": [ 00:26:56.845 { 00:26:56.845 "name": "BaseBdev1", 00:26:56.845 "uuid": "b1b404f5-72f7-4690-ac1a-f6b5ccd3d997", 00:26:56.845 "is_configured": true, 00:26:56.845 "data_offset": 256, 00:26:56.845 "data_size": 7936 00:26:56.845 }, 00:26:56.845 { 00:26:56.845 "name": "BaseBdev2", 00:26:56.845 "uuid": "7c1b3dba-8084-4a6f-9491-a3681b014824", 00:26:56.845 "is_configured": true, 00:26:56.845 "data_offset": 256, 00:26:56.845 
"data_size": 7936 00:26:56.845 } 00:26:56.845 ] 00:26:56.845 }' 00:26:56.845 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:56.845 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:57.409 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:57.409 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:57.409 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:57.409 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:57.409 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:57.409 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:26:57.409 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:57.409 10:42:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:57.409 [2024-07-25 10:42:01.098779] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:57.666 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:57.666 "name": "Existed_Raid", 00:26:57.666 "aliases": [ 00:26:57.666 "7c2b05c0-a0e5-4384-8b10-c4a998a979cc" 00:26:57.666 ], 00:26:57.666 "product_name": "Raid Volume", 00:26:57.666 "block_size": 4128, 00:26:57.666 "num_blocks": 7936, 00:26:57.666 "uuid": "7c2b05c0-a0e5-4384-8b10-c4a998a979cc", 00:26:57.666 "md_size": 32, 
00:26:57.666 "md_interleave": true, 00:26:57.666 "dif_type": 0, 00:26:57.666 "assigned_rate_limits": { 00:26:57.666 "rw_ios_per_sec": 0, 00:26:57.666 "rw_mbytes_per_sec": 0, 00:26:57.666 "r_mbytes_per_sec": 0, 00:26:57.666 "w_mbytes_per_sec": 0 00:26:57.666 }, 00:26:57.666 "claimed": false, 00:26:57.666 "zoned": false, 00:26:57.666 "supported_io_types": { 00:26:57.666 "read": true, 00:26:57.666 "write": true, 00:26:57.666 "unmap": false, 00:26:57.666 "flush": false, 00:26:57.666 "reset": true, 00:26:57.666 "nvme_admin": false, 00:26:57.666 "nvme_io": false, 00:26:57.666 "nvme_io_md": false, 00:26:57.666 "write_zeroes": true, 00:26:57.666 "zcopy": false, 00:26:57.666 "get_zone_info": false, 00:26:57.666 "zone_management": false, 00:26:57.666 "zone_append": false, 00:26:57.666 "compare": false, 00:26:57.666 "compare_and_write": false, 00:26:57.666 "abort": false, 00:26:57.666 "seek_hole": false, 00:26:57.666 "seek_data": false, 00:26:57.666 "copy": false, 00:26:57.666 "nvme_iov_md": false 00:26:57.666 }, 00:26:57.666 "memory_domains": [ 00:26:57.666 { 00:26:57.666 "dma_device_id": "system", 00:26:57.666 "dma_device_type": 1 00:26:57.666 }, 00:26:57.666 { 00:26:57.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.666 "dma_device_type": 2 00:26:57.666 }, 00:26:57.666 { 00:26:57.666 "dma_device_id": "system", 00:26:57.666 "dma_device_type": 1 00:26:57.666 }, 00:26:57.666 { 00:26:57.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.666 "dma_device_type": 2 00:26:57.666 } 00:26:57.666 ], 00:26:57.666 "driver_specific": { 00:26:57.666 "raid": { 00:26:57.666 "uuid": "7c2b05c0-a0e5-4384-8b10-c4a998a979cc", 00:26:57.666 "strip_size_kb": 0, 00:26:57.666 "state": "online", 00:26:57.666 "raid_level": "raid1", 00:26:57.666 "superblock": true, 00:26:57.666 "num_base_bdevs": 2, 00:26:57.666 "num_base_bdevs_discovered": 2, 00:26:57.666 "num_base_bdevs_operational": 2, 00:26:57.666 "base_bdevs_list": [ 00:26:57.666 { 00:26:57.666 "name": "BaseBdev1", 00:26:57.666 "uuid": 
"b1b404f5-72f7-4690-ac1a-f6b5ccd3d997", 00:26:57.666 "is_configured": true, 00:26:57.666 "data_offset": 256, 00:26:57.666 "data_size": 7936 00:26:57.666 }, 00:26:57.666 { 00:26:57.666 "name": "BaseBdev2", 00:26:57.666 "uuid": "7c1b3dba-8084-4a6f-9491-a3681b014824", 00:26:57.666 "is_configured": true, 00:26:57.666 "data_offset": 256, 00:26:57.666 "data_size": 7936 00:26:57.666 } 00:26:57.666 ] 00:26:57.666 } 00:26:57.666 } 00:26:57.666 }' 00:26:57.666 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:57.666 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:57.666 BaseBdev2' 00:26:57.666 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:57.666 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:57.666 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:57.924 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:57.924 "name": "BaseBdev1", 00:26:57.924 "aliases": [ 00:26:57.924 "b1b404f5-72f7-4690-ac1a-f6b5ccd3d997" 00:26:57.924 ], 00:26:57.924 "product_name": "Malloc disk", 00:26:57.924 "block_size": 4128, 00:26:57.924 "num_blocks": 8192, 00:26:57.924 "uuid": "b1b404f5-72f7-4690-ac1a-f6b5ccd3d997", 00:26:57.924 "md_size": 32, 00:26:57.924 "md_interleave": true, 00:26:57.924 "dif_type": 0, 00:26:57.924 "assigned_rate_limits": { 00:26:57.924 "rw_ios_per_sec": 0, 00:26:57.924 "rw_mbytes_per_sec": 0, 00:26:57.924 "r_mbytes_per_sec": 0, 00:26:57.924 "w_mbytes_per_sec": 0 00:26:57.924 }, 00:26:57.924 "claimed": 
true, 00:26:57.924 "claim_type": "exclusive_write", 00:26:57.924 "zoned": false, 00:26:57.924 "supported_io_types": { 00:26:57.924 "read": true, 00:26:57.924 "write": true, 00:26:57.924 "unmap": true, 00:26:57.924 "flush": true, 00:26:57.924 "reset": true, 00:26:57.924 "nvme_admin": false, 00:26:57.924 "nvme_io": false, 00:26:57.924 "nvme_io_md": false, 00:26:57.924 "write_zeroes": true, 00:26:57.924 "zcopy": true, 00:26:57.924 "get_zone_info": false, 00:26:57.924 "zone_management": false, 00:26:57.924 "zone_append": false, 00:26:57.924 "compare": false, 00:26:57.924 "compare_and_write": false, 00:26:57.924 "abort": true, 00:26:57.924 "seek_hole": false, 00:26:57.924 "seek_data": false, 00:26:57.924 "copy": true, 00:26:57.924 "nvme_iov_md": false 00:26:57.924 }, 00:26:57.924 "memory_domains": [ 00:26:57.924 { 00:26:57.924 "dma_device_id": "system", 00:26:57.924 "dma_device_type": 1 00:26:57.924 }, 00:26:57.924 { 00:26:57.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.924 "dma_device_type": 2 00:26:57.924 } 00:26:57.924 ], 00:26:57.924 "driver_specific": {} 00:26:57.924 }' 00:26:57.924 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:57.924 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:57.924 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:26:57.924 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:57.924 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:57.924 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:57.924 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:57.924 10:42:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.181 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:26:58.181 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.182 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.182 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:58.182 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:58.182 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:58.182 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:58.439 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:58.439 "name": "BaseBdev2", 00:26:58.439 "aliases": [ 00:26:58.439 "7c1b3dba-8084-4a6f-9491-a3681b014824" 00:26:58.439 ], 00:26:58.439 "product_name": "Malloc disk", 00:26:58.439 "block_size": 4128, 00:26:58.439 "num_blocks": 8192, 00:26:58.439 "uuid": "7c1b3dba-8084-4a6f-9491-a3681b014824", 00:26:58.439 "md_size": 32, 00:26:58.439 "md_interleave": true, 00:26:58.439 "dif_type": 0, 00:26:58.439 "assigned_rate_limits": { 00:26:58.439 "rw_ios_per_sec": 0, 00:26:58.439 "rw_mbytes_per_sec": 0, 00:26:58.439 "r_mbytes_per_sec": 0, 00:26:58.439 "w_mbytes_per_sec": 0 00:26:58.439 }, 00:26:58.439 "claimed": true, 00:26:58.439 "claim_type": "exclusive_write", 00:26:58.439 "zoned": false, 00:26:58.439 "supported_io_types": { 00:26:58.439 "read": true, 00:26:58.439 "write": true, 00:26:58.439 "unmap": true, 00:26:58.439 
"flush": true, 00:26:58.439 "reset": true, 00:26:58.439 "nvme_admin": false, 00:26:58.439 "nvme_io": false, 00:26:58.439 "nvme_io_md": false, 00:26:58.439 "write_zeroes": true, 00:26:58.439 "zcopy": true, 00:26:58.439 "get_zone_info": false, 00:26:58.439 "zone_management": false, 00:26:58.439 "zone_append": false, 00:26:58.439 "compare": false, 00:26:58.439 "compare_and_write": false, 00:26:58.439 "abort": true, 00:26:58.439 "seek_hole": false, 00:26:58.439 "seek_data": false, 00:26:58.439 "copy": true, 00:26:58.439 "nvme_iov_md": false 00:26:58.439 }, 00:26:58.439 "memory_domains": [ 00:26:58.439 { 00:26:58.439 "dma_device_id": "system", 00:26:58.439 "dma_device_type": 1 00:26:58.439 }, 00:26:58.439 { 00:26:58.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:58.439 "dma_device_type": 2 00:26:58.439 } 00:26:58.439 ], 00:26:58.439 "driver_specific": {} 00:26:58.439 }' 00:26:58.439 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:58.439 10:42:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:58.439 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:26:58.439 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:58.439 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:58.439 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:58.439 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.439 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.697 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:26:58.697 10:42:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.697 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.697 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:58.697 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:58.954 [2024-07-25 10:42:02.474289] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:58.954 10:42:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.954 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:59.212 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.212 "name": "Existed_Raid", 00:26:59.212 "uuid": "7c2b05c0-a0e5-4384-8b10-c4a998a979cc", 00:26:59.212 "strip_size_kb": 0, 00:26:59.212 "state": "online", 00:26:59.212 "raid_level": "raid1", 00:26:59.212 "superblock": true, 00:26:59.212 "num_base_bdevs": 2, 00:26:59.212 "num_base_bdevs_discovered": 1, 00:26:59.212 "num_base_bdevs_operational": 1, 00:26:59.212 "base_bdevs_list": [ 00:26:59.212 { 00:26:59.212 "name": null, 00:26:59.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.212 "is_configured": false, 00:26:59.212 "data_offset": 256, 00:26:59.212 "data_size": 7936 00:26:59.212 }, 00:26:59.212 { 00:26:59.212 "name": "BaseBdev2", 00:26:59.212 "uuid": "7c1b3dba-8084-4a6f-9491-a3681b014824", 00:26:59.212 "is_configured": true, 00:26:59.212 "data_offset": 256, 00:26:59.212 "data_size": 7936 00:26:59.212 } 00:26:59.212 ] 00:26:59.212 }' 00:26:59.212 
10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.212 10:42:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:59.781 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:59.781 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:59.781 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.781 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:00.038 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:00.038 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:00.038 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:00.296 [2024-07-25 10:42:03.776651] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:00.296 [2024-07-25 10:42:03.776751] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:00.296 [2024-07-25 10:42:03.789220] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:00.296 [2024-07-25 10:42:03.789272] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:00.296 [2024-07-25 10:42:03.789283] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12eb720 name Existed_Raid, state offline 00:27:00.296 10:42:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:00.296 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:00.296 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.296 10:42:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 2475986 00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 2475986 ']' 00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 2475986 00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2475986 00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
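The teardown traced here guards the kill with several checks before touching the pid: the process must still exist, the OS must be Linux, and the pid's command name is inspected so a bare `sudo` is never killed. A minimal standalone sketch of that pattern follows (a hypothetical helper, not the real `killprocess` from `autotest_common.sh`; `ps --no-headers` assumes GNU procps, as used in the log):

```shell
# Illustrative sketch of the killprocess safety checks from the trace:
# verify the pid exists and inspect its command name before killing it.
killprocess_sketch() {
  local pid=$1 name
  kill -0 "$pid" 2>/dev/null || return 1     # process must still exist
  name=$(ps --no-headers -o comm= "$pid")    # command name, GNU ps syntax
  [ "$name" = "sudo" ] && return 1           # never kill a bare sudo
  kill "$pid"
  wait "$pid" 2>/dev/null || true            # reap it if it is our child
  return 0
}

# Usage: start a throwaway child process and tear it down.
sleep 30 &
child=$!
killprocess_sketch "$child" && echo "killed $child"
```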
00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2475986' 00:27:00.554 killing process with pid 2475986 00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 2475986 00:27:00.554 [2024-07-25 10:42:04.080679] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:00.554 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 2475986 00:27:00.554 [2024-07-25 10:42:04.081856] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:00.812 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:27:00.812 00:27:00.812 real 0m10.722s 00:27:00.812 user 0m19.336s 00:27:00.812 sys 0m1.529s 00:27:00.812 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:00.812 10:42:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:00.812 ************************************ 00:27:00.812 END TEST raid_state_function_test_sb_md_interleaved 00:27:00.812 ************************************ 00:27:00.812 10:42:04 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:27:00.812 10:42:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:00.812 10:42:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:00.812 10:42:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:00.812 ************************************ 00:27:00.812 START TEST raid_superblock_test_md_interleaved 00:27:00.812 ************************************ 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:27:00.812 10:42:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=2477531 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 2477531 /var/tmp/spdk-raid.sock 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 2477531 ']' 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:00.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:00.812 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:00.812 [2024-07-25 10:42:04.467270] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
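`waitforlisten` here blocks until the freshly started `bdev_svc` app answers on its RPC socket, and `waitforbdev` earlier in the section does the same for a bdev name. Both follow a generic poll-with-timeout shape, sketched below as a hypothetical helper (not the SPDK implementation, which also handles RPC framework states):

```shell
# Generic poll-until-ready sketch of the waitforlisten/waitforbdev pattern:
# retry a probe command once per second until it succeeds or time runs out.
waitfor() {
  local timeout=$1; shift
  local i
  for ((i = 0; i < timeout; i++)); do
    if "$@"; then
      return 0                   # probe succeeded: resource is ready
    fi
    sleep 1
  done
  return 1                       # timed out without the probe succeeding
}

# Usage: wait up to 3s for a (hypothetical) marker file to appear.
tmp=$(mktemp)
waitfor 3 test -e "$tmp" && echo "ready"
rm -f "$tmp"
```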
00:27:00.813 [2024-07-25 10:42:04.467352] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477531 ] 00:27:01.071 [2024-07-25 10:42:04.586007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.071 [2024-07-25 10:42:04.745937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.329 [2024-07-25 10:42:04.823920] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:01.329 [2024-07-25 10:42:04.823955] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:01.329 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:01.329 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:27:01.329 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:27:01.329 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:01.329 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:27:01.329 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:27:01.329 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:01.329 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:01.329 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:01.329 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 
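The `base_bdevs_malloc+=`/`base_bdevs_pt+=`/`base_bdevs_pt_uuid+=` appends traced above build three parallel arrays, one entry per base bdev, as the `(( i <= num_base_bdevs ))` loop runs. That bookkeeping can be sketched standalone (names and UUIDs follow the `malloc1`/`pt1`/`00000000-…-000000000001` pattern visible in the log):

```shell
# Sketch of the parallel-array bookkeeping from the raid_superblock_test setup.
num_base_bdevs=2
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()

for ((i = 1; i <= num_base_bdevs; i++)); do
  base_bdevs_malloc+=("malloc$i")        # backing malloc bdev for this slot
  base_bdevs_pt+=("pt$i")                # passthru bdev created on top of it
  base_bdevs_pt_uuid+=("00000000-0000-0000-0000-00000000000$i")  # fixed test UUID
done

echo "${base_bdevs_pt[*]}"    # pt1 pt2
```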
00:27:01.329 10:42:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:27:01.587 malloc1 00:27:01.587 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:01.845 [2024-07-25 10:42:05.368905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:01.845 [2024-07-25 10:42:05.368961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.845 [2024-07-25 10:42:05.368987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbd8c00 00:27:01.845 [2024-07-25 10:42:05.369000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.845 [2024-07-25 10:42:05.370486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.845 [2024-07-25 10:42:05.370510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:01.845 pt1 00:27:01.845 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:01.845 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:01.845 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:27:01.845 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:27:01.845 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:01.845 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:27:01.845 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:01.845 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:01.845 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:27:02.102 malloc2 00:27:02.102 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:02.359 [2024-07-25 10:42:05.868342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:02.359 [2024-07-25 10:42:05.868425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:02.359 [2024-07-25 10:42:05.868447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd592b0 00:27:02.359 [2024-07-25 10:42:05.868459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:02.359 [2024-07-25 10:42:05.869820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.359 [2024-07-25 10:42:05.869843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:02.359 pt2 00:27:02.359 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:02.359 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:02.359 10:42:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 
'pt1 pt2' -n raid_bdev1 -s 00:27:02.617 [2024-07-25 10:42:06.117045] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:02.617 [2024-07-25 10:42:06.118335] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:02.617 [2024-07-25 10:42:06.118520] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xd5aa20 00:27:02.617 [2024-07-25 10:42:06.118535] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:02.617 [2024-07-25 10:42:06.118613] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbd6c30 00:27:02.617 [2024-07-25 10:42:06.118703] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd5aa20 00:27:02.618 [2024-07-25 10:42:06.118715] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd5aa20 00:27:02.618 [2024-07-25 10:42:06.118791] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.618 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:02.618 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:02.618 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:02.618 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:02.618 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:02.618 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:02.618 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.618 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.618 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.618 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.618 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.618 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.875 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:02.875 "name": "raid_bdev1", 00:27:02.875 "uuid": "4fed4877-123f-45d9-bf29-e55759707cfd", 00:27:02.875 "strip_size_kb": 0, 00:27:02.875 "state": "online", 00:27:02.875 "raid_level": "raid1", 00:27:02.875 "superblock": true, 00:27:02.875 "num_base_bdevs": 2, 00:27:02.875 "num_base_bdevs_discovered": 2, 00:27:02.875 "num_base_bdevs_operational": 2, 00:27:02.875 "base_bdevs_list": [ 00:27:02.875 { 00:27:02.875 "name": "pt1", 00:27:02.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:02.875 "is_configured": true, 00:27:02.875 "data_offset": 256, 00:27:02.875 "data_size": 7936 00:27:02.875 }, 00:27:02.875 { 00:27:02.875 "name": "pt2", 00:27:02.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:02.875 "is_configured": true, 00:27:02.875 "data_offset": 256, 00:27:02.875 "data_size": 7936 00:27:02.875 } 00:27:02.875 ] 00:27:02.875 }' 00:27:02.875 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:02.875 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:03.439 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:27:03.439 10:42:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:03.439 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:03.439 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:03.439 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:03.439 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:27:03.439 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:03.439 10:42:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:03.694 [2024-07-25 10:42:07.192124] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:03.694 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:03.694 "name": "raid_bdev1", 00:27:03.694 "aliases": [ 00:27:03.694 "4fed4877-123f-45d9-bf29-e55759707cfd" 00:27:03.694 ], 00:27:03.694 "product_name": "Raid Volume", 00:27:03.694 "block_size": 4128, 00:27:03.694 "num_blocks": 7936, 00:27:03.694 "uuid": "4fed4877-123f-45d9-bf29-e55759707cfd", 00:27:03.694 "md_size": 32, 00:27:03.694 "md_interleave": true, 00:27:03.694 "dif_type": 0, 00:27:03.694 "assigned_rate_limits": { 00:27:03.694 "rw_ios_per_sec": 0, 00:27:03.694 "rw_mbytes_per_sec": 0, 00:27:03.694 "r_mbytes_per_sec": 0, 00:27:03.694 "w_mbytes_per_sec": 0 00:27:03.694 }, 00:27:03.694 "claimed": false, 00:27:03.694 "zoned": false, 00:27:03.694 "supported_io_types": { 00:27:03.694 "read": true, 00:27:03.694 "write": true, 00:27:03.694 "unmap": false, 00:27:03.694 "flush": false, 00:27:03.694 "reset": true, 00:27:03.694 "nvme_admin": false, 
00:27:03.694 "nvme_io": false, 00:27:03.694 "nvme_io_md": false, 00:27:03.694 "write_zeroes": true, 00:27:03.694 "zcopy": false, 00:27:03.694 "get_zone_info": false, 00:27:03.694 "zone_management": false, 00:27:03.694 "zone_append": false, 00:27:03.694 "compare": false, 00:27:03.694 "compare_and_write": false, 00:27:03.694 "abort": false, 00:27:03.694 "seek_hole": false, 00:27:03.694 "seek_data": false, 00:27:03.694 "copy": false, 00:27:03.694 "nvme_iov_md": false 00:27:03.694 }, 00:27:03.694 "memory_domains": [ 00:27:03.694 { 00:27:03.694 "dma_device_id": "system", 00:27:03.694 "dma_device_type": 1 00:27:03.694 }, 00:27:03.694 { 00:27:03.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.694 "dma_device_type": 2 00:27:03.694 }, 00:27:03.694 { 00:27:03.694 "dma_device_id": "system", 00:27:03.694 "dma_device_type": 1 00:27:03.694 }, 00:27:03.694 { 00:27:03.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.694 "dma_device_type": 2 00:27:03.694 } 00:27:03.694 ], 00:27:03.695 "driver_specific": { 00:27:03.695 "raid": { 00:27:03.695 "uuid": "4fed4877-123f-45d9-bf29-e55759707cfd", 00:27:03.695 "strip_size_kb": 0, 00:27:03.695 "state": "online", 00:27:03.695 "raid_level": "raid1", 00:27:03.695 "superblock": true, 00:27:03.695 "num_base_bdevs": 2, 00:27:03.695 "num_base_bdevs_discovered": 2, 00:27:03.695 "num_base_bdevs_operational": 2, 00:27:03.695 "base_bdevs_list": [ 00:27:03.695 { 00:27:03.695 "name": "pt1", 00:27:03.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:03.695 "is_configured": true, 00:27:03.695 "data_offset": 256, 00:27:03.695 "data_size": 7936 00:27:03.695 }, 00:27:03.695 { 00:27:03.695 "name": "pt2", 00:27:03.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:03.695 "is_configured": true, 00:27:03.695 "data_offset": 256, 00:27:03.695 "data_size": 7936 00:27:03.695 } 00:27:03.695 ] 00:27:03.695 } 00:27:03.695 } 00:27:03.695 }' 00:27:03.695 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:03.695 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:03.695 pt2' 00:27:03.695 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:03.695 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:03.695 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:03.951 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:03.951 "name": "pt1", 00:27:03.951 "aliases": [ 00:27:03.951 "00000000-0000-0000-0000-000000000001" 00:27:03.951 ], 00:27:03.951 "product_name": "passthru", 00:27:03.951 "block_size": 4128, 00:27:03.951 "num_blocks": 8192, 00:27:03.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:03.951 "md_size": 32, 00:27:03.951 "md_interleave": true, 00:27:03.951 "dif_type": 0, 00:27:03.951 "assigned_rate_limits": { 00:27:03.951 "rw_ios_per_sec": 0, 00:27:03.951 "rw_mbytes_per_sec": 0, 00:27:03.951 "r_mbytes_per_sec": 0, 00:27:03.951 "w_mbytes_per_sec": 0 00:27:03.951 }, 00:27:03.951 "claimed": true, 00:27:03.951 "claim_type": "exclusive_write", 00:27:03.951 "zoned": false, 00:27:03.951 "supported_io_types": { 00:27:03.951 "read": true, 00:27:03.951 "write": true, 00:27:03.951 "unmap": true, 00:27:03.951 "flush": true, 00:27:03.951 "reset": true, 00:27:03.951 "nvme_admin": false, 00:27:03.951 "nvme_io": false, 00:27:03.951 "nvme_io_md": false, 00:27:03.951 "write_zeroes": true, 00:27:03.951 "zcopy": true, 00:27:03.951 "get_zone_info": false, 00:27:03.951 "zone_management": false, 00:27:03.951 "zone_append": false, 00:27:03.951 "compare": false, 00:27:03.951 "compare_and_write": false, 00:27:03.951 
"abort": true, 00:27:03.951 "seek_hole": false, 00:27:03.951 "seek_data": false, 00:27:03.951 "copy": true, 00:27:03.951 "nvme_iov_md": false 00:27:03.951 }, 00:27:03.951 "memory_domains": [ 00:27:03.951 { 00:27:03.951 "dma_device_id": "system", 00:27:03.951 "dma_device_type": 1 00:27:03.951 }, 00:27:03.951 { 00:27:03.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.951 "dma_device_type": 2 00:27:03.951 } 00:27:03.951 ], 00:27:03.951 "driver_specific": { 00:27:03.951 "passthru": { 00:27:03.951 "name": "pt1", 00:27:03.951 "base_bdev_name": "malloc1" 00:27:03.951 } 00:27:03.951 } 00:27:03.951 }' 00:27:03.951 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:03.951 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:03.951 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:03.951 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:03.951 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:03.951 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:03.951 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.208 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.208 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:04.208 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.208 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.209 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:04.209 10:42:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:04.209 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:04.209 10:42:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:04.466 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:04.466 "name": "pt2", 00:27:04.466 "aliases": [ 00:27:04.466 "00000000-0000-0000-0000-000000000002" 00:27:04.466 ], 00:27:04.466 "product_name": "passthru", 00:27:04.466 "block_size": 4128, 00:27:04.466 "num_blocks": 8192, 00:27:04.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:04.466 "md_size": 32, 00:27:04.466 "md_interleave": true, 00:27:04.466 "dif_type": 0, 00:27:04.466 "assigned_rate_limits": { 00:27:04.466 "rw_ios_per_sec": 0, 00:27:04.466 "rw_mbytes_per_sec": 0, 00:27:04.466 "r_mbytes_per_sec": 0, 00:27:04.466 "w_mbytes_per_sec": 0 00:27:04.466 }, 00:27:04.466 "claimed": true, 00:27:04.466 "claim_type": "exclusive_write", 00:27:04.466 "zoned": false, 00:27:04.466 "supported_io_types": { 00:27:04.466 "read": true, 00:27:04.466 "write": true, 00:27:04.466 "unmap": true, 00:27:04.466 "flush": true, 00:27:04.466 "reset": true, 00:27:04.466 "nvme_admin": false, 00:27:04.466 "nvme_io": false, 00:27:04.466 "nvme_io_md": false, 00:27:04.466 "write_zeroes": true, 00:27:04.466 "zcopy": true, 00:27:04.466 "get_zone_info": false, 00:27:04.466 "zone_management": false, 00:27:04.466 "zone_append": false, 00:27:04.466 "compare": false, 00:27:04.466 "compare_and_write": false, 00:27:04.466 "abort": true, 00:27:04.466 "seek_hole": false, 00:27:04.466 "seek_data": false, 00:27:04.466 "copy": true, 00:27:04.466 "nvme_iov_md": false 00:27:04.466 }, 00:27:04.466 "memory_domains": [ 00:27:04.466 { 00:27:04.466 "dma_device_id": 
"system", 00:27:04.466 "dma_device_type": 1 00:27:04.466 }, 00:27:04.466 { 00:27:04.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.466 "dma_device_type": 2 00:27:04.466 } 00:27:04.466 ], 00:27:04.466 "driver_specific": { 00:27:04.466 "passthru": { 00:27:04.466 "name": "pt2", 00:27:04.466 "base_bdev_name": "malloc2" 00:27:04.466 } 00:27:04.466 } 00:27:04.466 }' 00:27:04.466 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.466 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.466 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:04.466 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.466 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.466 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:04.467 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.724 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.724 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:04.724 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.724 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.724 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:04.724 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:04.724 10:42:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:27:04.982 [2024-07-25 10:42:08.515676] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:04.982 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4fed4877-123f-45d9-bf29-e55759707cfd 00:27:04.982 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 4fed4877-123f-45d9-bf29-e55759707cfd ']' 00:27:04.982 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:05.240 [2024-07-25 10:42:08.760133] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:05.240 [2024-07-25 10:42:08.760174] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:05.240 [2024-07-25 10:42:08.760247] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:05.240 [2024-07-25 10:42:08.760310] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:05.240 [2024-07-25 10:42:08.760324] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd5aa20 name raid_bdev1, state offline 00:27:05.240 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.240 10:42:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:27:05.499 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:27:05.499 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:27:05.499 10:42:09 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:05.499 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:05.757 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:05.757 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:06.013 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:06.013 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:06.270 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:27:06.270 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:06.270 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:27:06.270 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:06.270 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:06.270 10:42:09 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.270 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:06.270 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.270 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:06.270 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.270 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:06.270 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:06.270 10:42:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:06.527 [2024-07-25 10:42:09.999429] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:06.527 [2024-07-25 10:42:10.000723] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:06.527 [2024-07-25 10:42:10.000802] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:06.527 [2024-07-25 10:42:10.000855] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:06.527 [2024-07-25 10:42:10.000913] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:06.527 [2024-07-25 10:42:10.000925] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbd90a0 name raid_bdev1, state configuring 00:27:06.527 request: 00:27:06.527 { 00:27:06.527 "name": "raid_bdev1", 00:27:06.527 "raid_level": "raid1", 00:27:06.527 "base_bdevs": [ 00:27:06.527 "malloc1", 00:27:06.527 "malloc2" 00:27:06.527 ], 00:27:06.527 "superblock": false, 00:27:06.527 "method": "bdev_raid_create", 00:27:06.527 "req_id": 1 00:27:06.527 } 00:27:06.527 Got JSON-RPC error response 00:27:06.527 response: 00:27:06.527 { 00:27:06.527 "code": -17, 00:27:06.527 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:06.527 } 00:27:06.527 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:27:06.527 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:06.527 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:06.527 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:06.527 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.527 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:27:06.783 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:27:06.783 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:27:06.783 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:27:07.040 [2024-07-25 10:42:10.500711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:07.040 [2024-07-25 10:42:10.500766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.040 [2024-07-25 10:42:10.500786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbd8e30 00:27:07.040 [2024-07-25 10:42:10.500798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.040 [2024-07-25 10:42:10.502178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.040 [2024-07-25 10:42:10.502202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:07.040 [2024-07-25 10:42:10.502262] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:07.040 [2024-07-25 10:42:10.502294] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:07.040 pt1 00:27:07.040 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:07.040 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:07.040 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:07.040 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:07.040 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:07.040 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:07.040 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:07.040 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:07.040 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:07.040 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:07.040 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.040 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.296 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:07.296 "name": "raid_bdev1", 00:27:07.296 "uuid": "4fed4877-123f-45d9-bf29-e55759707cfd", 00:27:07.296 "strip_size_kb": 0, 00:27:07.296 "state": "configuring", 00:27:07.296 "raid_level": "raid1", 00:27:07.296 "superblock": true, 00:27:07.296 "num_base_bdevs": 2, 00:27:07.296 "num_base_bdevs_discovered": 1, 00:27:07.296 "num_base_bdevs_operational": 2, 00:27:07.296 "base_bdevs_list": [ 00:27:07.296 { 00:27:07.296 "name": "pt1", 00:27:07.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:07.296 "is_configured": true, 00:27:07.296 "data_offset": 256, 00:27:07.296 "data_size": 7936 00:27:07.296 }, 00:27:07.296 { 00:27:07.296 "name": null, 00:27:07.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:07.296 "is_configured": false, 00:27:07.296 "data_offset": 256, 00:27:07.296 "data_size": 7936 00:27:07.296 } 00:27:07.296 ] 00:27:07.296 }' 00:27:07.296 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:07.296 10:42:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:07.859 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:27:07.859 10:42:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:27:07.859 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:07.859 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:07.859 [2024-07-25 10:42:11.499495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:07.859 [2024-07-25 10:42:11.499559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.859 [2024-07-25 10:42:11.499579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbd8510 00:27:07.859 [2024-07-25 10:42:11.499591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.859 [2024-07-25 10:42:11.499782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.859 [2024-07-25 10:42:11.499803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:07.859 [2024-07-25 10:42:11.499854] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:07.859 [2024-07-25 10:42:11.499876] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:07.859 [2024-07-25 10:42:11.499981] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xbd7330 00:27:07.859 [2024-07-25 10:42:11.499994] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:07.859 [2024-07-25 10:42:11.500042] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd5da40 00:27:07.860 [2024-07-25 10:42:11.500153] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbd7330 00:27:07.860 [2024-07-25 10:42:11.500167] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbd7330 00:27:07.860 [2024-07-25 10:42:11.500230] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:07.860 pt2 00:27:07.860 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:07.860 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:07.860 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:07.860 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:07.860 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:07.860 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:07.860 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:07.860 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:07.860 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:07.860 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:07.860 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:07.860 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:07.860 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.860 10:42:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.117 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:08.117 "name": "raid_bdev1", 00:27:08.117 "uuid": "4fed4877-123f-45d9-bf29-e55759707cfd", 00:27:08.117 "strip_size_kb": 0, 00:27:08.117 "state": "online", 00:27:08.117 "raid_level": "raid1", 00:27:08.117 "superblock": true, 00:27:08.117 "num_base_bdevs": 2, 00:27:08.117 "num_base_bdevs_discovered": 2, 00:27:08.117 "num_base_bdevs_operational": 2, 00:27:08.117 "base_bdevs_list": [ 00:27:08.117 { 00:27:08.117 "name": "pt1", 00:27:08.117 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:08.117 "is_configured": true, 00:27:08.117 "data_offset": 256, 00:27:08.117 "data_size": 7936 00:27:08.117 }, 00:27:08.117 { 00:27:08.117 "name": "pt2", 00:27:08.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:08.117 "is_configured": true, 00:27:08.117 "data_offset": 256, 00:27:08.117 "data_size": 7936 00:27:08.117 } 00:27:08.117 ] 00:27:08.117 }' 00:27:08.117 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:08.117 10:42:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:08.683 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:27:08.683 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:08.683 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:08.683 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:08.683 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:08.683 10:42:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:27:08.683 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:08.683 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:08.941 [2024-07-25 10:42:12.542431] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:08.941 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:08.941 "name": "raid_bdev1", 00:27:08.941 "aliases": [ 00:27:08.941 "4fed4877-123f-45d9-bf29-e55759707cfd" 00:27:08.941 ], 00:27:08.941 "product_name": "Raid Volume", 00:27:08.941 "block_size": 4128, 00:27:08.941 "num_blocks": 7936, 00:27:08.941 "uuid": "4fed4877-123f-45d9-bf29-e55759707cfd", 00:27:08.941 "md_size": 32, 00:27:08.941 "md_interleave": true, 00:27:08.941 "dif_type": 0, 00:27:08.941 "assigned_rate_limits": { 00:27:08.942 "rw_ios_per_sec": 0, 00:27:08.942 "rw_mbytes_per_sec": 0, 00:27:08.942 "r_mbytes_per_sec": 0, 00:27:08.942 "w_mbytes_per_sec": 0 00:27:08.942 }, 00:27:08.942 "claimed": false, 00:27:08.942 "zoned": false, 00:27:08.942 "supported_io_types": { 00:27:08.942 "read": true, 00:27:08.942 "write": true, 00:27:08.942 "unmap": false, 00:27:08.942 "flush": false, 00:27:08.942 "reset": true, 00:27:08.942 "nvme_admin": false, 00:27:08.942 "nvme_io": false, 00:27:08.942 "nvme_io_md": false, 00:27:08.942 "write_zeroes": true, 00:27:08.942 "zcopy": false, 00:27:08.942 "get_zone_info": false, 00:27:08.942 "zone_management": false, 00:27:08.942 "zone_append": false, 00:27:08.942 "compare": false, 00:27:08.942 "compare_and_write": false, 00:27:08.942 "abort": false, 00:27:08.942 "seek_hole": false, 00:27:08.942 "seek_data": false, 00:27:08.942 "copy": false, 00:27:08.942 "nvme_iov_md": false 00:27:08.942 }, 
00:27:08.942 "memory_domains": [ 00:27:08.942 { 00:27:08.942 "dma_device_id": "system", 00:27:08.942 "dma_device_type": 1 00:27:08.942 }, 00:27:08.942 { 00:27:08.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:08.942 "dma_device_type": 2 00:27:08.942 }, 00:27:08.942 { 00:27:08.942 "dma_device_id": "system", 00:27:08.942 "dma_device_type": 1 00:27:08.942 }, 00:27:08.942 { 00:27:08.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:08.942 "dma_device_type": 2 00:27:08.942 } 00:27:08.942 ], 00:27:08.942 "driver_specific": { 00:27:08.942 "raid": { 00:27:08.942 "uuid": "4fed4877-123f-45d9-bf29-e55759707cfd", 00:27:08.942 "strip_size_kb": 0, 00:27:08.942 "state": "online", 00:27:08.942 "raid_level": "raid1", 00:27:08.942 "superblock": true, 00:27:08.942 "num_base_bdevs": 2, 00:27:08.942 "num_base_bdevs_discovered": 2, 00:27:08.942 "num_base_bdevs_operational": 2, 00:27:08.942 "base_bdevs_list": [ 00:27:08.942 { 00:27:08.942 "name": "pt1", 00:27:08.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:08.942 "is_configured": true, 00:27:08.942 "data_offset": 256, 00:27:08.942 "data_size": 7936 00:27:08.942 }, 00:27:08.942 { 00:27:08.942 "name": "pt2", 00:27:08.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:08.942 "is_configured": true, 00:27:08.942 "data_offset": 256, 00:27:08.942 "data_size": 7936 00:27:08.942 } 00:27:08.942 ] 00:27:08.942 } 00:27:08.942 } 00:27:08.942 }' 00:27:08.942 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:08.942 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:08.942 pt2' 00:27:08.942 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:08.942 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:08.942 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:09.199 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:09.200 "name": "pt1", 00:27:09.200 "aliases": [ 00:27:09.200 "00000000-0000-0000-0000-000000000001" 00:27:09.200 ], 00:27:09.200 "product_name": "passthru", 00:27:09.200 "block_size": 4128, 00:27:09.200 "num_blocks": 8192, 00:27:09.200 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:09.200 "md_size": 32, 00:27:09.200 "md_interleave": true, 00:27:09.200 "dif_type": 0, 00:27:09.200 "assigned_rate_limits": { 00:27:09.200 "rw_ios_per_sec": 0, 00:27:09.200 "rw_mbytes_per_sec": 0, 00:27:09.200 "r_mbytes_per_sec": 0, 00:27:09.200 "w_mbytes_per_sec": 0 00:27:09.200 }, 00:27:09.200 "claimed": true, 00:27:09.200 "claim_type": "exclusive_write", 00:27:09.200 "zoned": false, 00:27:09.200 "supported_io_types": { 00:27:09.200 "read": true, 00:27:09.200 "write": true, 00:27:09.200 "unmap": true, 00:27:09.200 "flush": true, 00:27:09.200 "reset": true, 00:27:09.200 "nvme_admin": false, 00:27:09.200 "nvme_io": false, 00:27:09.200 "nvme_io_md": false, 00:27:09.200 "write_zeroes": true, 00:27:09.200 "zcopy": true, 00:27:09.200 "get_zone_info": false, 00:27:09.200 "zone_management": false, 00:27:09.200 "zone_append": false, 00:27:09.200 "compare": false, 00:27:09.200 "compare_and_write": false, 00:27:09.200 "abort": true, 00:27:09.200 "seek_hole": false, 00:27:09.200 "seek_data": false, 00:27:09.200 "copy": true, 00:27:09.200 "nvme_iov_md": false 00:27:09.200 }, 00:27:09.200 "memory_domains": [ 00:27:09.200 { 00:27:09.200 "dma_device_id": "system", 00:27:09.200 "dma_device_type": 1 00:27:09.200 }, 00:27:09.200 { 00:27:09.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.200 "dma_device_type": 2 00:27:09.200 } 00:27:09.200 ], 00:27:09.200 
"driver_specific": { 00:27:09.200 "passthru": { 00:27:09.200 "name": "pt1", 00:27:09.200 "base_bdev_name": "malloc1" 00:27:09.200 } 00:27:09.200 } 00:27:09.200 }' 00:27:09.200 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.200 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.457 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:09.457 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.457 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.457 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:09.457 10:42:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:09.457 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:09.457 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:09.457 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:09.457 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:09.457 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:09.457 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:09.457 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:09.457 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:09.714 10:42:13 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:09.714 "name": "pt2", 00:27:09.714 "aliases": [ 00:27:09.714 "00000000-0000-0000-0000-000000000002" 00:27:09.714 ], 00:27:09.714 "product_name": "passthru", 00:27:09.714 "block_size": 4128, 00:27:09.714 "num_blocks": 8192, 00:27:09.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:09.714 "md_size": 32, 00:27:09.714 "md_interleave": true, 00:27:09.714 "dif_type": 0, 00:27:09.714 "assigned_rate_limits": { 00:27:09.714 "rw_ios_per_sec": 0, 00:27:09.714 "rw_mbytes_per_sec": 0, 00:27:09.714 "r_mbytes_per_sec": 0, 00:27:09.714 "w_mbytes_per_sec": 0 00:27:09.714 }, 00:27:09.714 "claimed": true, 00:27:09.714 "claim_type": "exclusive_write", 00:27:09.714 "zoned": false, 00:27:09.714 "supported_io_types": { 00:27:09.714 "read": true, 00:27:09.714 "write": true, 00:27:09.714 "unmap": true, 00:27:09.714 "flush": true, 00:27:09.714 "reset": true, 00:27:09.714 "nvme_admin": false, 00:27:09.714 "nvme_io": false, 00:27:09.714 "nvme_io_md": false, 00:27:09.714 "write_zeroes": true, 00:27:09.714 "zcopy": true, 00:27:09.714 "get_zone_info": false, 00:27:09.714 "zone_management": false, 00:27:09.714 "zone_append": false, 00:27:09.714 "compare": false, 00:27:09.714 "compare_and_write": false, 00:27:09.714 "abort": true, 00:27:09.714 "seek_hole": false, 00:27:09.714 "seek_data": false, 00:27:09.714 "copy": true, 00:27:09.714 "nvme_iov_md": false 00:27:09.714 }, 00:27:09.714 "memory_domains": [ 00:27:09.714 { 00:27:09.714 "dma_device_id": "system", 00:27:09.714 "dma_device_type": 1 00:27:09.714 }, 00:27:09.714 { 00:27:09.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.714 "dma_device_type": 2 00:27:09.714 } 00:27:09.714 ], 00:27:09.714 "driver_specific": { 00:27:09.714 "passthru": { 00:27:09.714 "name": "pt2", 00:27:09.715 "base_bdev_name": "malloc2" 00:27:09.715 } 00:27:09.715 } 00:27:09.715 }' 00:27:09.715 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.715 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.972 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:09.972 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.972 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.972 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:09.972 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:09.972 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:09.972 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:09.972 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:09.972 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:09.972 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:09.972 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:09.972 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:27:10.230 [2024-07-25 10:42:13.849949] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:10.230 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 4fed4877-123f-45d9-bf29-e55759707cfd '!=' 4fed4877-123f-45d9-bf29-e55759707cfd ']' 00:27:10.230 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:27:10.230 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:10.230 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:27:10.230 10:42:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:10.488 [2024-07-25 10:42:14.090424] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:10.488 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:10.488 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:10.488 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:10.488 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:10.488 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:10.488 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:10.488 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:10.488 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.488 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.488 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.488 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.488 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.754 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:10.754 "name": "raid_bdev1", 00:27:10.754 "uuid": "4fed4877-123f-45d9-bf29-e55759707cfd", 00:27:10.754 "strip_size_kb": 0, 00:27:10.754 "state": "online", 00:27:10.754 "raid_level": "raid1", 00:27:10.754 "superblock": true, 00:27:10.754 "num_base_bdevs": 2, 00:27:10.754 "num_base_bdevs_discovered": 1, 00:27:10.754 "num_base_bdevs_operational": 1, 00:27:10.754 "base_bdevs_list": [ 00:27:10.754 { 00:27:10.754 "name": null, 00:27:10.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.754 "is_configured": false, 00:27:10.754 "data_offset": 256, 00:27:10.754 "data_size": 7936 00:27:10.754 }, 00:27:10.754 { 00:27:10.754 "name": "pt2", 00:27:10.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:10.754 "is_configured": true, 00:27:10.754 "data_offset": 256, 00:27:10.754 "data_size": 7936 00:27:10.754 } 00:27:10.754 ] 00:27:10.754 }' 00:27:10.754 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:10.754 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:11.327 10:42:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:11.647 [2024-07-25 10:42:15.141158] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:11.647 [2024-07-25 10:42:15.141198] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:11.647 [2024-07-25 10:42:15.141282] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:11.647 [2024-07-25 
10:42:15.141341] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:11.647 [2024-07-25 10:42:15.141354] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbd7330 name raid_bdev1, state offline 00:27:11.647 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.647 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:27:11.904 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:27:11.904 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:27:11.904 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:27:11.904 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:11.904 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:12.163 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:12.163 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:12.163 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:27:12.163 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:12.163 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:27:12.163 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:12.422 [2024-07-25 10:42:15.935192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:12.422 [2024-07-25 10:42:15.935246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.422 [2024-07-25 10:42:15.935264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd5b490 00:27:12.422 [2024-07-25 10:42:15.935276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.422 [2024-07-25 10:42:15.936634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.422 [2024-07-25 10:42:15.936657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:12.422 [2024-07-25 10:42:15.936705] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:12.422 [2024-07-25 10:42:15.936736] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:12.422 [2024-07-25 10:42:15.936813] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xd5b8d0 00:27:12.422 [2024-07-25 10:42:15.936826] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:12.423 [2024-07-25 10:42:15.936877] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd5da20 00:27:12.423 [2024-07-25 10:42:15.936951] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd5b8d0 00:27:12.423 [2024-07-25 10:42:15.936963] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd5b8d0 00:27:12.423 [2024-07-25 10:42:15.937022] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:12.423 pt2 00:27:12.423 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:12.423 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:12.423 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:12.423 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:12.423 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:12.423 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:12.423 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:12.423 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:12.423 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:12.423 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:12.423 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.423 10:42:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.680 10:42:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:12.680 "name": "raid_bdev1", 00:27:12.680 "uuid": "4fed4877-123f-45d9-bf29-e55759707cfd", 00:27:12.680 "strip_size_kb": 0, 00:27:12.680 "state": "online", 00:27:12.680 "raid_level": "raid1", 00:27:12.680 "superblock": true, 00:27:12.680 "num_base_bdevs": 2, 00:27:12.680 "num_base_bdevs_discovered": 1, 00:27:12.680 "num_base_bdevs_operational": 1, 00:27:12.680 
"base_bdevs_list": [ 00:27:12.680 { 00:27:12.680 "name": null, 00:27:12.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.680 "is_configured": false, 00:27:12.680 "data_offset": 256, 00:27:12.680 "data_size": 7936 00:27:12.680 }, 00:27:12.680 { 00:27:12.680 "name": "pt2", 00:27:12.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:12.680 "is_configured": true, 00:27:12.680 "data_offset": 256, 00:27:12.680 "data_size": 7936 00:27:12.680 } 00:27:12.680 ] 00:27:12.680 }' 00:27:12.680 10:42:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:12.680 10:42:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:13.243 10:42:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:13.501 [2024-07-25 10:42:16.986003] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:13.501 [2024-07-25 10:42:16.986033] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:13.501 [2024-07-25 10:42:16.986128] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:13.501 [2024-07-25 10:42:16.986186] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:13.501 [2024-07-25 10:42:16.986199] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd5b8d0 name raid_bdev1, state offline 00:27:13.501 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.501 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:27:13.758 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@526 -- # raid_bdev= 00:27:13.758 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:27:13.758 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:27:13.758 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:14.018 [2024-07-25 10:42:17.483288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:14.018 [2024-07-25 10:42:17.483347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:14.018 [2024-07-25 10:42:17.483368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbd6da0 00:27:14.018 [2024-07-25 10:42:17.483380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:14.018 [2024-07-25 10:42:17.484757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:14.018 [2024-07-25 10:42:17.484781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:14.018 [2024-07-25 10:42:17.484832] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:14.018 [2024-07-25 10:42:17.484861] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:14.018 [2024-07-25 10:42:17.484950] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:14.019 [2024-07-25 10:42:17.484964] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:14.019 [2024-07-25 10:42:17.484980] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd5dbd0 name raid_bdev1, state configuring 00:27:14.019 [2024-07-25 10:42:17.485004] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:14.019 [2024-07-25 10:42:17.485072] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xd5de50 00:27:14.019 [2024-07-25 10:42:17.485084] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:14.019 [2024-07-25 10:42:17.485157] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbd7b40 00:27:14.019 [2024-07-25 10:42:17.485235] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd5de50 00:27:14.019 [2024-07-25 10:42:17.485248] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd5de50 00:27:14.019 [2024-07-25 10:42:17.485317] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:14.019 pt1 00:27:14.019 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:27:14.019 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:14.019 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:14.019 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:14.019 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:14.019 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:14.019 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:14.019 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:14.019 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:14.019 10:42:17 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:14.019 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:14.019 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.019 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.277 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:14.277 "name": "raid_bdev1", 00:27:14.277 "uuid": "4fed4877-123f-45d9-bf29-e55759707cfd", 00:27:14.277 "strip_size_kb": 0, 00:27:14.277 "state": "online", 00:27:14.277 "raid_level": "raid1", 00:27:14.277 "superblock": true, 00:27:14.277 "num_base_bdevs": 2, 00:27:14.277 "num_base_bdevs_discovered": 1, 00:27:14.277 "num_base_bdevs_operational": 1, 00:27:14.277 "base_bdevs_list": [ 00:27:14.277 { 00:27:14.277 "name": null, 00:27:14.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.277 "is_configured": false, 00:27:14.277 "data_offset": 256, 00:27:14.277 "data_size": 7936 00:27:14.277 }, 00:27:14.277 { 00:27:14.277 "name": "pt2", 00:27:14.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:14.277 "is_configured": true, 00:27:14.277 "data_offset": 256, 00:27:14.277 "data_size": 7936 00:27:14.277 } 00:27:14.277 ] 00:27:14.277 }' 00:27:14.277 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:14.277 10:42:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:14.841 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:14.841 
10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:14.841 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:27:14.841 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:14.841 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:27:15.099 [2024-07-25 10:42:18.766885] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:15.099 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 4fed4877-123f-45d9-bf29-e55759707cfd '!=' 4fed4877-123f-45d9-bf29-e55759707cfd ']' 00:27:15.099 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 2477531 00:27:15.099 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 2477531 ']' 00:27:15.099 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 2477531 00:27:15.099 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:27:15.099 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:15.099 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2477531 00:27:15.356 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:15.356 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:15.356 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 2477531' 00:27:15.356 killing process with pid 2477531 00:27:15.356 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 2477531 00:27:15.356 [2024-07-25 10:42:18.808952] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:15.356 10:42:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 2477531 00:27:15.356 [2024-07-25 10:42:18.809020] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:15.356 [2024-07-25 10:42:18.809074] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:15.356 [2024-07-25 10:42:18.809111] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd5de50 name raid_bdev1, state offline 00:27:15.356 [2024-07-25 10:42:18.828445] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:15.356 10:42:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:27:15.356 00:27:15.356 real 0m14.647s 00:27:15.356 user 0m27.339s 00:27:15.356 sys 0m2.196s 00:27:15.356 10:42:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:15.356 10:42:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:15.356 ************************************ 00:27:15.356 END TEST raid_superblock_test_md_interleaved 00:27:15.356 ************************************ 00:27:15.614 10:42:19 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:27:15.614 10:42:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:15.614 10:42:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:15.614 10:42:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:15.614 ************************************ 
00:27:15.614 START TEST raid_rebuild_test_sb_md_interleaved 00:27:15.614 ************************************ 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=2480067 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 2480067 /var/tmp/spdk-raid.sock 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 2480067 ']' 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 
-- # local max_retries=100 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:15.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:15.614 10:42:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:15.614 [2024-07-25 10:42:19.165727] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:27:15.614 [2024-07-25 10:42:19.165805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480067 ] 00:27:15.614 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:15.614 Zero copy mechanism will not be used. 
00:27:15.614 [2024-07-25 10:42:19.242913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.871 [2024-07-25 10:42:19.353690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.871 [2024-07-25 10:42:19.425408] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:15.871 [2024-07-25 10:42:19.425435] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:16.436 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:16.436 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:27:16.436 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:16.436 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:27:16.693 BaseBdev1_malloc 00:27:16.693 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:16.950 [2024-07-25 10:42:20.608291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:16.950 [2024-07-25 10:42:20.608356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.950 [2024-07-25 10:42:20.608386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x104ad80 00:27:16.950 [2024-07-25 10:42:20.608421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.950 [2024-07-25 10:42:20.609856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.950 [2024-07-25 10:42:20.609879] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:16.950 BaseBdev1 00:27:16.950 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:16.950 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:27:17.208 BaseBdev2_malloc 00:27:17.579 10:42:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:17.837 [2024-07-25 10:42:21.192623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:17.837 [2024-07-25 10:42:21.192689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.837 [2024-07-25 10:42:21.192723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11cb430 00:27:17.837 [2024-07-25 10:42:21.192739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.837 [2024-07-25 10:42:21.194371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.837 [2024-07-25 10:42:21.194399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:17.837 BaseBdev2 00:27:17.837 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:27:17.837 spare_malloc 00:27:17.837 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay 
-r 0 -t 0 -w 100000 -n 100000 00:27:18.094 spare_delay 00:27:18.094 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:18.351 [2024-07-25 10:42:21.974837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:18.351 [2024-07-25 10:42:21.974898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:18.351 [2024-07-25 10:42:21.974930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11cbd00 00:27:18.351 [2024-07-25 10:42:21.974955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:18.351 [2024-07-25 10:42:21.976380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:18.351 [2024-07-25 10:42:21.976420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:18.351 spare 00:27:18.351 10:42:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:18.610 [2024-07-25 10:42:22.219562] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:18.610 [2024-07-25 10:42:22.220907] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:18.610 [2024-07-25 10:42:22.221100] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x11d6550 00:27:18.610 [2024-07-25 10:42:22.221135] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:18.610 [2024-07-25 10:42:22.221228] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x104b810 00:27:18.610 [2024-07-25 10:42:22.221335] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x11d6550 00:27:18.610 [2024-07-25 10:42:22.221354] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11d6550 00:27:18.610 [2024-07-25 10:42:22.221452] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:18.610 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:18.610 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:18.610 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:18.610 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:18.610 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:18.610 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:18.610 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:18.610 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:18.610 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:18.610 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:18.610 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.610 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.868 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:27:18.868 "name": "raid_bdev1", 00:27:18.868 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce", 00:27:18.868 "strip_size_kb": 0, 00:27:18.868 "state": "online", 00:27:18.868 "raid_level": "raid1", 00:27:18.868 "superblock": true, 00:27:18.868 "num_base_bdevs": 2, 00:27:18.868 "num_base_bdevs_discovered": 2, 00:27:18.868 "num_base_bdevs_operational": 2, 00:27:18.868 "base_bdevs_list": [ 00:27:18.868 { 00:27:18.868 "name": "BaseBdev1", 00:27:18.868 "uuid": "78492cd0-444c-52ee-816e-c17020a545a7", 00:27:18.868 "is_configured": true, 00:27:18.868 "data_offset": 256, 00:27:18.868 "data_size": 7936 00:27:18.868 }, 00:27:18.868 { 00:27:18.868 "name": "BaseBdev2", 00:27:18.868 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0", 00:27:18.868 "is_configured": true, 00:27:18.868 "data_offset": 256, 00:27:18.868 "data_size": 7936 00:27:18.868 } 00:27:18.868 ] 00:27:18.868 }' 00:27:18.868 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:18.868 10:42:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:19.433 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:19.433 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:19.691 [2024-07-25 10:42:23.246493] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:19.691 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:27:19.691 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.692 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- 
# jq -r '.[].base_bdevs_list[0].data_offset' 00:27:19.948 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:27:19.948 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:19.948 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:27:19.948 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:20.206 [2024-07-25 10:42:23.747593] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:20.206 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:20.206 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:20.206 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:20.206 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:20.206 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:20.206 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:20.206 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:20.206 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:20.206 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:20.206 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:20.206 
10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.206 10:42:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.464 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:20.464 "name": "raid_bdev1", 00:27:20.464 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce", 00:27:20.464 "strip_size_kb": 0, 00:27:20.464 "state": "online", 00:27:20.464 "raid_level": "raid1", 00:27:20.464 "superblock": true, 00:27:20.464 "num_base_bdevs": 2, 00:27:20.464 "num_base_bdevs_discovered": 1, 00:27:20.464 "num_base_bdevs_operational": 1, 00:27:20.464 "base_bdevs_list": [ 00:27:20.464 { 00:27:20.464 "name": null, 00:27:20.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:20.464 "is_configured": false, 00:27:20.464 "data_offset": 256, 00:27:20.464 "data_size": 7936 00:27:20.464 }, 00:27:20.464 { 00:27:20.464 "name": "BaseBdev2", 00:27:20.464 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0", 00:27:20.464 "is_configured": true, 00:27:20.464 "data_offset": 256, 00:27:20.464 "data_size": 7936 00:27:20.464 } 00:27:20.464 ] 00:27:20.464 }' 00:27:20.464 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:20.464 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:21.029 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:21.287 [2024-07-25 10:42:24.806442] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:21.287 [2024-07-25 10:42:24.811176] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x11d79a0 00:27:21.287 [2024-07-25 10:42:24.813223] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:21.287 10:42:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:22.219 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:22.219 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:22.219 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:22.219 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:22.219 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:22.219 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.219 10:42:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.477 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:22.477 "name": "raid_bdev1", 00:27:22.477 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce", 00:27:22.477 "strip_size_kb": 0, 00:27:22.477 "state": "online", 00:27:22.477 "raid_level": "raid1", 00:27:22.477 "superblock": true, 00:27:22.477 "num_base_bdevs": 2, 00:27:22.477 "num_base_bdevs_discovered": 2, 00:27:22.477 "num_base_bdevs_operational": 2, 00:27:22.477 "process": { 00:27:22.477 "type": "rebuild", 00:27:22.477 "target": "spare", 00:27:22.477 "progress": { 00:27:22.477 "blocks": 3072, 00:27:22.477 "percent": 38 00:27:22.477 } 00:27:22.477 }, 00:27:22.477 "base_bdevs_list": [ 00:27:22.477 { 
00:27:22.477 "name": "spare", 00:27:22.477 "uuid": "27df6a4d-952c-5cec-82fa-1adbb39feebd", 00:27:22.477 "is_configured": true, 00:27:22.477 "data_offset": 256, 00:27:22.477 "data_size": 7936 00:27:22.477 }, 00:27:22.477 { 00:27:22.477 "name": "BaseBdev2", 00:27:22.477 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0", 00:27:22.477 "is_configured": true, 00:27:22.477 "data_offset": 256, 00:27:22.477 "data_size": 7936 00:27:22.477 } 00:27:22.477 ] 00:27:22.477 }' 00:27:22.477 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:22.477 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:22.477 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:22.477 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:22.477 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:22.735 [2024-07-25 10:42:26.384515] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:22.735 [2024-07-25 10:42:26.426606] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:22.735 [2024-07-25 10:42:26.426663] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.735 [2024-07-25 10:42:26.426684] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:22.735 [2024-07-25 10:42:26.426695] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:22.992 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:22.992 10:42:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:22.992 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:22.992 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:22.992 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:22.993 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:22.993 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:22.993 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:22.993 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:22.993 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:22.993 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.993 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.993 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:22.993 "name": "raid_bdev1", 00:27:22.993 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce", 00:27:22.993 "strip_size_kb": 0, 00:27:22.993 "state": "online", 00:27:22.993 "raid_level": "raid1", 00:27:22.993 "superblock": true, 00:27:22.993 "num_base_bdevs": 2, 00:27:22.993 "num_base_bdevs_discovered": 1, 00:27:22.993 "num_base_bdevs_operational": 1, 00:27:22.993 "base_bdevs_list": [ 00:27:22.993 { 00:27:22.993 "name": null, 00:27:22.993 
"uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.993 "is_configured": false, 00:27:22.993 "data_offset": 256, 00:27:22.993 "data_size": 7936 00:27:22.993 }, 00:27:22.993 { 00:27:22.993 "name": "BaseBdev2", 00:27:22.993 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0", 00:27:22.993 "is_configured": true, 00:27:22.993 "data_offset": 256, 00:27:22.993 "data_size": 7936 00:27:22.993 } 00:27:22.993 ] 00:27:22.993 }' 00:27:22.993 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:22.993 10:42:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:23.557 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:23.557 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:23.557 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:23.557 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:23.557 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:23.557 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.557 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.815 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:23.815 "name": "raid_bdev1", 00:27:23.815 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce", 00:27:23.815 "strip_size_kb": 0, 00:27:23.815 "state": "online", 00:27:23.815 "raid_level": "raid1", 00:27:23.815 "superblock": true, 00:27:23.815 
"num_base_bdevs": 2, 00:27:23.815 "num_base_bdevs_discovered": 1, 00:27:23.815 "num_base_bdevs_operational": 1, 00:27:23.815 "base_bdevs_list": [ 00:27:23.815 { 00:27:23.815 "name": null, 00:27:23.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.815 "is_configured": false, 00:27:23.815 "data_offset": 256, 00:27:23.815 "data_size": 7936 00:27:23.815 }, 00:27:23.815 { 00:27:23.815 "name": "BaseBdev2", 00:27:23.815 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0", 00:27:23.815 "is_configured": true, 00:27:23.815 "data_offset": 256, 00:27:23.815 "data_size": 7936 00:27:23.815 } 00:27:23.815 ] 00:27:23.815 }' 00:27:23.815 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:23.815 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:23.815 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:24.072 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:24.072 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:24.330 [2024-07-25 10:42:27.787035] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:24.330 [2024-07-25 10:42:27.791902] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1043190 00:27:24.330 [2024-07-25 10:42:27.793308] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:24.330 10:42:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:25.264 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:27:25.264 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:27:25.264 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:27:25.264 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare
00:27:25.264 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:27:25.264 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:25.264 10:42:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:25.522 "name": "raid_bdev1",
00:27:25.522 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce",
00:27:25.522 "strip_size_kb": 0,
00:27:25.522 "state": "online",
00:27:25.522 "raid_level": "raid1",
00:27:25.522 "superblock": true,
00:27:25.522 "num_base_bdevs": 2,
00:27:25.522 "num_base_bdevs_discovered": 2,
00:27:25.522 "num_base_bdevs_operational": 2,
00:27:25.522 "process": {
00:27:25.522 "type": "rebuild",
00:27:25.522 "target": "spare",
00:27:25.522 "progress": {
00:27:25.522 "blocks": 3072,
00:27:25.522 "percent": 38
00:27:25.522 }
00:27:25.522 },
00:27:25.522 "base_bdevs_list": [
00:27:25.522 {
00:27:25.522 "name": "spare",
00:27:25.522 "uuid": "27df6a4d-952c-5cec-82fa-1adbb39feebd",
00:27:25.522 "is_configured": true,
00:27:25.522 "data_offset": 256,
00:27:25.522 "data_size": 7936
00:27:25.522 },
00:27:25.522 {
00:27:25.522 "name": "BaseBdev2",
00:27:25.522 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0",
00:27:25.522 "is_configured": true,
00:27:25.522 "data_offset": 256,
00:27:25.522 "data_size": 7936
00:27:25.522 }
00:27:25.522 ]
00:27:25.522 }'
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']'
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']'
00:27:25.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']'
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']'
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1113
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout ))
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:27:25.522 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare
00:27:25.523 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:27:25.523 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:25.523 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:25.780 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:25.780 "name": "raid_bdev1",
00:27:25.780 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce",
00:27:25.780 "strip_size_kb": 0,
00:27:25.780 "state": "online",
00:27:25.780 "raid_level": "raid1",
00:27:25.780 "superblock": true,
00:27:25.780 "num_base_bdevs": 2,
00:27:25.780 "num_base_bdevs_discovered": 2,
00:27:25.780 "num_base_bdevs_operational": 2,
00:27:25.780 "process": {
00:27:25.780 "type": "rebuild",
00:27:25.780 "target": "spare",
00:27:25.780 "progress": {
00:27:25.780 "blocks": 3840,
00:27:25.780 "percent": 48
00:27:25.780 }
00:27:25.780 },
00:27:25.780 "base_bdevs_list": [
00:27:25.780 {
00:27:25.781 "name": "spare",
00:27:25.781 "uuid": "27df6a4d-952c-5cec-82fa-1adbb39feebd",
00:27:25.781 "is_configured": true,
00:27:25.781 "data_offset": 256,
00:27:25.781 "data_size": 7936
00:27:25.781 },
00:27:25.781 {
00:27:25.781 "name": "BaseBdev2",
00:27:25.781 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0",
00:27:25.781 "is_configured": true,
00:27:25.781 "data_offset": 256,
00:27:25.781 "data_size": 7936
00:27:25.781 }
00:27:25.781 ]
00:27:25.781 }'
00:27:25.781 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:27:25.781 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:27:25.781 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:27:25.781 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:27:25.781 10:42:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1
00:27:27.154 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout ))
00:27:27.154 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:27:27.154 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:27:27.154 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:27:27.154 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare
00:27:27.154 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:27:27.154 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:27.154 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:27.154 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:27.154 "name": "raid_bdev1",
00:27:27.154 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce",
00:27:27.154 "strip_size_kb": 0,
00:27:27.154 "state": "online",
00:27:27.154 "raid_level": "raid1",
00:27:27.154 "superblock": true,
00:27:27.154 "num_base_bdevs": 2,
00:27:27.154 "num_base_bdevs_discovered": 2,
00:27:27.154 "num_base_bdevs_operational": 2,
00:27:27.154 "process": {
00:27:27.154 "type": "rebuild",
00:27:27.154 "target": "spare",
00:27:27.154 "progress": {
00:27:27.154 "blocks": 7424,
00:27:27.154 "percent": 93
00:27:27.154 }
00:27:27.154 },
00:27:27.154 "base_bdevs_list": [
00:27:27.154 {
00:27:27.154 "name": "spare",
00:27:27.154 "uuid": "27df6a4d-952c-5cec-82fa-1adbb39feebd",
00:27:27.154 "is_configured": true,
00:27:27.154 "data_offset": 256,
00:27:27.154 "data_size": 7936
00:27:27.154 },
00:27:27.154 {
00:27:27.154 "name": "BaseBdev2",
00:27:27.154 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0",
00:27:27.154 "is_configured": true,
00:27:27.154 "data_offset": 256,
00:27:27.154 "data_size": 7936
00:27:27.154 }
00:27:27.154 ]
00:27:27.155 }'
00:27:27.155 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:27:27.155 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:27:27.155 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:27:27.155 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:27:27.155 10:42:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1
00:27:27.412 [2024-07-25 10:42:30.918488] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:27:27.412 [2024-07-25 10:42:30.918549] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:27:27.412 [2024-07-25 10:42:30.918654] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:27:28.344 10:42:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout ))
00:27:28.344 10:42:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:27:28.344 10:42:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:27:28.344 10:42:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:27:28.344 10:42:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare
00:27:28.344 10:42:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:27:28.344 10:42:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:28.344 10:42:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:28.637 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:28.637 "name": "raid_bdev1",
00:27:28.637 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce",
00:27:28.637 "strip_size_kb": 0,
00:27:28.637 "state": "online",
00:27:28.637 "raid_level": "raid1",
00:27:28.637 "superblock": true,
00:27:28.637 "num_base_bdevs": 2,
00:27:28.637 "num_base_bdevs_discovered": 2,
00:27:28.637 "num_base_bdevs_operational": 2,
00:27:28.637 "base_bdevs_list": [
00:27:28.637 {
00:27:28.637 "name": "spare",
00:27:28.637 "uuid": "27df6a4d-952c-5cec-82fa-1adbb39feebd",
00:27:28.637 "is_configured": true,
00:27:28.637 "data_offset": 256,
00:27:28.637 "data_size": 7936
00:27:28.637 },
00:27:28.637 {
00:27:28.637 "name": "BaseBdev2",
00:27:28.637 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0",
00:27:28.637 "is_configured": true,
00:27:28.637 "data_offset": 256,
00:27:28.637 "data_size": 7936
00:27:28.637 }
00:27:28.637 ]
00:27:28.637 }'
00:27:28.637 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:27:28.637 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]]
00:27:28.637 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:27:28.637 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]]
00:27:28.637 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break
00:27:28.637 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none
00:27:28.637 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:27:28.637 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:27:28.637 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none
00:27:28.638 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:27:28.638 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:28.638 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:28.895 "name": "raid_bdev1",
00:27:28.895 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce",
00:27:28.895 "strip_size_kb": 0,
00:27:28.895 "state": "online",
00:27:28.895 "raid_level": "raid1",
00:27:28.895 "superblock": true,
00:27:28.895 "num_base_bdevs": 2,
00:27:28.895 "num_base_bdevs_discovered": 2,
00:27:28.895 "num_base_bdevs_operational": 2,
00:27:28.895 "base_bdevs_list": [
00:27:28.895 {
00:27:28.895 "name": "spare",
00:27:28.895 "uuid": "27df6a4d-952c-5cec-82fa-1adbb39feebd",
00:27:28.895 "is_configured": true,
00:27:28.895 "data_offset": 256,
00:27:28.895 "data_size": 7936
00:27:28.895 },
00:27:28.895 {
00:27:28.895 "name": "BaseBdev2",
00:27:28.895 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0",
00:27:28.895 "is_configured": true,
00:27:28.895 "data_offset": 256,
00:27:28.895 "data_size": 7936
00:27:28.895 }
00:27:28.895 ]
00:27:28.895 }'
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:28.895 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:29.153 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:27:29.153 "name": "raid_bdev1",
00:27:29.153 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce",
00:27:29.153 "strip_size_kb": 0,
00:27:29.153 "state": "online",
00:27:29.153 "raid_level": "raid1",
00:27:29.153 "superblock": true,
00:27:29.153 "num_base_bdevs": 2,
00:27:29.153 "num_base_bdevs_discovered": 2,
00:27:29.153 "num_base_bdevs_operational": 2,
00:27:29.153 "base_bdevs_list": [
00:27:29.153 {
00:27:29.153 "name": "spare",
00:27:29.153 "uuid": "27df6a4d-952c-5cec-82fa-1adbb39feebd",
00:27:29.153 "is_configured": true,
00:27:29.153 "data_offset": 256,
00:27:29.153 "data_size": 7936
00:27:29.153 },
00:27:29.153 {
00:27:29.153 "name": "BaseBdev2",
00:27:29.153 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0",
00:27:29.153 "is_configured": true,
00:27:29.153 "data_offset": 256,
00:27:29.153 "data_size": 7936
00:27:29.153 }
00:27:29.153 ]
00:27:29.153 }'
00:27:29.153 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:27:29.153 10:42:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:27:29.718 10:42:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:27:29.975 [2024-07-25 10:42:33.502715] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:27:29.975 [2024-07-25 10:42:33.502744] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:27:29.975 [2024-07-25 10:42:33.502821] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:27:29.975 [2024-07-25 10:42:33.502893] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:27:29.975 [2024-07-25 10:42:33.502909] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11d6550 name raid_bdev1, state offline
00:27:29.975 10:42:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:29.975 10:42:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length
00:27:30.232 10:42:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]]
00:27:30.232 10:42:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']'
00:27:30.232 10:42:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']'
00:27:30.232 10:42:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:27:30.490 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:27:30.748 [2024-07-25 10:42:34.256720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:27:30.748 [2024-07-25 10:42:34.256781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:27:30.748 [2024-07-25 10:42:34.256813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1042900
00:27:30.748 [2024-07-25 10:42:34.256829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:27:30.748 [2024-07-25 10:42:34.258525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:27:30.748 [2024-07-25 10:42:34.258561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:27:30.748 [2024-07-25 10:42:34.258644] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:27:30.748 [2024-07-25 10:42:34.258680] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:27:30.748 [2024-07-25 10:42:34.258812] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:27:30.748 spare
00:27:30.748 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:27:30.748 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:27:30.748 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:27:30.748 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:27:30.748 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:27:30.748 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:27:30.748 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:27:30.748 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:27:30.748 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:27:30.748 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp
00:27:30.748 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:30.748 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:30.748 [2024-07-25 10:42:34.359150] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1041060
00:27:30.748 [2024-07-25 10:42:34.359175] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:27:30.748 [2024-07-25 10:42:34.359283] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1041610
00:27:30.748 [2024-07-25 10:42:34.359426] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1041060
00:27:30.748 [2024-07-25 10:42:34.359441] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1041060
00:27:30.748 [2024-07-25 10:42:34.359530] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:27:31.006 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:27:31.006 "name": "raid_bdev1",
00:27:31.006 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce",
00:27:31.006 "strip_size_kb": 0,
00:27:31.006 "state": "online",
00:27:31.006 "raid_level": "raid1",
00:27:31.006 "superblock": true,
00:27:31.006 "num_base_bdevs": 2,
00:27:31.006 "num_base_bdevs_discovered": 2,
00:27:31.006 "num_base_bdevs_operational": 2,
00:27:31.006 "base_bdevs_list": [
00:27:31.006 {
00:27:31.006 "name": "spare",
00:27:31.006 "uuid": "27df6a4d-952c-5cec-82fa-1adbb39feebd",
00:27:31.006 "is_configured": true,
00:27:31.006 "data_offset": 256,
00:27:31.006 "data_size": 7936
00:27:31.006 },
00:27:31.006 {
00:27:31.006 "name": "BaseBdev2",
00:27:31.006 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0",
00:27:31.006 "is_configured": true,
00:27:31.006 "data_offset": 256,
00:27:31.006 "data_size": 7936
00:27:31.006 }
00:27:31.006 ]
00:27:31.006 }'
00:27:31.006 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:27:31.006 10:42:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:27:31.618 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none
00:27:31.618 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:27:31.618 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:27:31.618 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none
00:27:31.618 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:27:31.618 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:31.618 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:31.876 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:31.876 "name": "raid_bdev1",
00:27:31.876 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce",
00:27:31.876 "strip_size_kb": 0,
00:27:31.876 "state": "online",
00:27:31.876 "raid_level": "raid1",
00:27:31.876 "superblock": true,
00:27:31.876 "num_base_bdevs": 2,
00:27:31.876 "num_base_bdevs_discovered": 2,
00:27:31.876 "num_base_bdevs_operational": 2,
00:27:31.876 "base_bdevs_list": [
00:27:31.876 {
00:27:31.876 "name": "spare",
00:27:31.876 "uuid": "27df6a4d-952c-5cec-82fa-1adbb39feebd",
00:27:31.876 "is_configured": true,
00:27:31.876 "data_offset": 256,
00:27:31.876 "data_size": 7936
00:27:31.876 },
00:27:31.876 {
00:27:31.876 "name": "BaseBdev2",
00:27:31.876 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0",
00:27:31.876 "is_configured": true,
00:27:31.876 "data_offset": 256,
00:27:31.876 "data_size": 7936
00:27:31.876 }
00:27:31.876 ]
00:27:31.876 }'
00:27:31.876 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:27:31.876 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:27:31.876 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:27:31.876 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:27:31.876 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:31.876 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name'
00:27:32.134 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]]
00:27:32.134 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:27:32.391 [2024-07-25 10:42:35.929250] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:27:32.391 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:27:32.391 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:27:32.391 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:27:32.391 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:27:32.391 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:27:32.391 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:27:32.391 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:27:32.391 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:27:32.391 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:27:32.391 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp
00:27:32.391 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:32.391 10:42:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:32.648 10:42:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:27:32.648 "name": "raid_bdev1",
00:27:32.648 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce",
00:27:32.648 "strip_size_kb": 0,
00:27:32.648 "state": "online",
00:27:32.648 "raid_level": "raid1",
00:27:32.648 "superblock": true,
00:27:32.648 "num_base_bdevs": 2,
00:27:32.648 "num_base_bdevs_discovered": 1,
00:27:32.648 "num_base_bdevs_operational": 1,
00:27:32.648 "base_bdevs_list": [
00:27:32.648 {
00:27:32.648 "name": null,
00:27:32.648 "uuid": "00000000-0000-0000-0000-000000000000",
00:27:32.648 "is_configured": false,
00:27:32.648 "data_offset": 256,
00:27:32.648 "data_size": 7936
00:27:32.648 },
00:27:32.648 {
00:27:32.648 "name": "BaseBdev2",
00:27:32.648 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0",
00:27:32.648 "is_configured": true,
00:27:32.648 "data_offset": 256,
00:27:32.648 "data_size": 7936
00:27:32.648 }
00:27:32.648 ]
00:27:32.648 }'
00:27:32.648 10:42:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:27:32.648 10:42:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:27:33.214 10:42:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:27:33.472 [2024-07-25 10:42:36.956009] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:27:33.472 [2024-07-25 10:42:36.956219] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:27:33.472 [2024-07-25 10:42:36.956241] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:27:33.472 [2024-07-25 10:42:36.956274] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:27:33.472 [2024-07-25 10:42:36.960823] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11d79a0
00:27:33.472 [2024-07-25 10:42:36.962949] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:27:33.472 10:42:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1
00:27:34.407 10:42:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:27:34.407 10:42:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:27:34.407 10:42:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:27:34.407 10:42:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare
00:27:34.407 10:42:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:27:34.407 10:42:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:34.407 10:42:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:34.663 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:34.663 "name": "raid_bdev1",
00:27:34.663 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce",
00:27:34.663 "strip_size_kb": 0,
00:27:34.663 "state": "online",
00:27:34.663 "raid_level": "raid1",
00:27:34.663 "superblock": true,
00:27:34.663 "num_base_bdevs": 2,
00:27:34.663 "num_base_bdevs_discovered": 2,
00:27:34.663 "num_base_bdevs_operational": 2,
00:27:34.663 "process": {
00:27:34.663 "type": "rebuild",
00:27:34.663 "target": "spare",
00:27:34.663 "progress": {
00:27:34.663 "blocks": 3072,
00:27:34.663 "percent": 38
00:27:34.663 }
00:27:34.663 },
00:27:34.663 "base_bdevs_list": [
00:27:34.663 {
00:27:34.663 "name": "spare",
00:27:34.663 "uuid": "27df6a4d-952c-5cec-82fa-1adbb39feebd",
00:27:34.663 "is_configured": true,
00:27:34.663 "data_offset": 256,
00:27:34.663 "data_size": 7936
00:27:34.663 },
00:27:34.663 {
00:27:34.663 "name": "BaseBdev2",
00:27:34.663 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0",
00:27:34.663 "is_configured": true,
00:27:34.663 "data_offset": 256,
00:27:34.663 "data_size": 7936
00:27:34.663 }
00:27:34.663 ]
00:27:34.663 }'
00:27:34.663 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:27:34.663 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:27:34.663 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:27:34.663 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:27:34.664 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:27:34.922 [2024-07-25 10:42:38.514682] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:27:34.922 [2024-07-25 10:42:38.576325] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:27:34.922 [2024-07-25 10:42:38.576382] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:27:34.922 [2024-07-25 10:42:38.576403] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:27:34.922 [2024-07-25 10:42:38.576413] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:27:34.922 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:27:34.922 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:27:34.922 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:27:34.922 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:27:34.922 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:27:34.922 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:27:34.922 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:27:34.922 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:27:34.922 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:27:34.922 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp
00:27:34.922 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:34.922 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:35.181 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:27:35.181 "name": "raid_bdev1",
00:27:35.181 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce",
00:27:35.181 "strip_size_kb": 0,
00:27:35.181 "state": "online",
00:27:35.181 "raid_level": "raid1",
00:27:35.181 "superblock": true,
00:27:35.181
"num_base_bdevs": 2, 00:27:35.181 "num_base_bdevs_discovered": 1, 00:27:35.181 "num_base_bdevs_operational": 1, 00:27:35.181 "base_bdevs_list": [ 00:27:35.181 { 00:27:35.181 "name": null, 00:27:35.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:35.181 "is_configured": false, 00:27:35.181 "data_offset": 256, 00:27:35.181 "data_size": 7936 00:27:35.181 }, 00:27:35.181 { 00:27:35.181 "name": "BaseBdev2", 00:27:35.181 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0", 00:27:35.181 "is_configured": true, 00:27:35.181 "data_offset": 256, 00:27:35.181 "data_size": 7936 00:27:35.181 } 00:27:35.181 ] 00:27:35.181 }' 00:27:35.181 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:35.181 10:42:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:35.748 10:42:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:36.007 [2024-07-25 10:42:39.631521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:36.007 [2024-07-25 10:42:39.631584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:36.007 [2024-07-25 10:42:39.631616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1042050 00:27:36.007 [2024-07-25 10:42:39.631632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:36.007 [2024-07-25 10:42:39.631888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:36.007 [2024-07-25 10:42:39.631913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:36.007 [2024-07-25 10:42:39.631977] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:36.007 [2024-07-25 10:42:39.631994] 
bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:36.007 [2024-07-25 10:42:39.632013] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:36.007 [2024-07-25 10:42:39.632036] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:36.007 [2024-07-25 10:42:39.636750] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1043190 00:27:36.007 spare 00:27:36.007 [2024-07-25 10:42:39.638298] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:36.007 10:42:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:27:37.383 10:42:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:37.383 10:42:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:37.383 10:42:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:37.383 10:42:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:37.383 10:42:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:37.383 10:42:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.383 10:42:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.383 10:42:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:37.383 "name": "raid_bdev1", 00:27:37.383 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce", 
00:27:37.383 "strip_size_kb": 0, 00:27:37.383 "state": "online", 00:27:37.383 "raid_level": "raid1", 00:27:37.383 "superblock": true, 00:27:37.383 "num_base_bdevs": 2, 00:27:37.383 "num_base_bdevs_discovered": 2, 00:27:37.383 "num_base_bdevs_operational": 2, 00:27:37.383 "process": { 00:27:37.383 "type": "rebuild", 00:27:37.383 "target": "spare", 00:27:37.383 "progress": { 00:27:37.383 "blocks": 3072, 00:27:37.383 "percent": 38 00:27:37.383 } 00:27:37.383 }, 00:27:37.383 "base_bdevs_list": [ 00:27:37.383 { 00:27:37.383 "name": "spare", 00:27:37.383 "uuid": "27df6a4d-952c-5cec-82fa-1adbb39feebd", 00:27:37.383 "is_configured": true, 00:27:37.383 "data_offset": 256, 00:27:37.383 "data_size": 7936 00:27:37.383 }, 00:27:37.383 { 00:27:37.383 "name": "BaseBdev2", 00:27:37.383 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0", 00:27:37.383 "is_configured": true, 00:27:37.383 "data_offset": 256, 00:27:37.383 "data_size": 7936 00:27:37.383 } 00:27:37.383 ] 00:27:37.383 }' 00:27:37.383 10:42:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:37.383 10:42:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:37.383 10:42:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:37.383 10:42:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:37.383 10:42:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:37.642 [2024-07-25 10:42:41.208828] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:37.642 [2024-07-25 10:42:41.251456] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:37.642 [2024-07-25 
10:42:41.251512] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:37.642 [2024-07-25 10:42:41.251533] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:37.642 [2024-07-25 10:42:41.251544] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:37.642 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:37.642 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:37.642 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:37.642 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:37.642 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:37.642 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:37.642 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:37.642 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:37.642 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:37.642 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:37.642 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.642 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.900 10:42:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:37.901 "name": "raid_bdev1", 00:27:37.901 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce", 00:27:37.901 "strip_size_kb": 0, 00:27:37.901 "state": "online", 00:27:37.901 "raid_level": "raid1", 00:27:37.901 "superblock": true, 00:27:37.901 "num_base_bdevs": 2, 00:27:37.901 "num_base_bdevs_discovered": 1, 00:27:37.901 "num_base_bdevs_operational": 1, 00:27:37.901 "base_bdevs_list": [ 00:27:37.901 { 00:27:37.901 "name": null, 00:27:37.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.901 "is_configured": false, 00:27:37.901 "data_offset": 256, 00:27:37.901 "data_size": 7936 00:27:37.901 }, 00:27:37.901 { 00:27:37.901 "name": "BaseBdev2", 00:27:37.901 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0", 00:27:37.901 "is_configured": true, 00:27:37.901 "data_offset": 256, 00:27:37.901 "data_size": 7936 00:27:37.901 } 00:27:37.901 ] 00:27:37.901 }' 00:27:37.901 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:37.901 10:42:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:38.466 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:38.466 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:38.466 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:38.466 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:38.466 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:38.466 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.466 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.724 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:38.724 "name": "raid_bdev1", 00:27:38.724 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce", 00:27:38.724 "strip_size_kb": 0, 00:27:38.724 "state": "online", 00:27:38.724 "raid_level": "raid1", 00:27:38.724 "superblock": true, 00:27:38.724 "num_base_bdevs": 2, 00:27:38.724 "num_base_bdevs_discovered": 1, 00:27:38.724 "num_base_bdevs_operational": 1, 00:27:38.724 "base_bdevs_list": [ 00:27:38.724 { 00:27:38.724 "name": null, 00:27:38.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.724 "is_configured": false, 00:27:38.724 "data_offset": 256, 00:27:38.724 "data_size": 7936 00:27:38.724 }, 00:27:38.724 { 00:27:38.724 "name": "BaseBdev2", 00:27:38.724 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0", 00:27:38.724 "is_configured": true, 00:27:38.724 "data_offset": 256, 00:27:38.724 "data_size": 7936 00:27:38.724 } 00:27:38.724 ] 00:27:38.724 }' 00:27:38.724 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:38.724 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:38.724 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:38.724 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:38.724 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:38.982 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:39.239 [2024-07-25 10:42:42.897169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:39.240 [2024-07-25 10:42:42.897218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.240 [2024-07-25 10:42:42.897245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x104afb0 00:27:39.240 [2024-07-25 10:42:42.897261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.240 [2024-07-25 10:42:42.897474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.240 [2024-07-25 10:42:42.897499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:39.240 [2024-07-25 10:42:42.897554] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:39.240 [2024-07-25 10:42:42.897571] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:39.240 [2024-07-25 10:42:42.897581] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:39.240 BaseBdev1 00:27:39.240 10:42:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:27:40.611 10:42:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:40.611 10:42:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:40.611 10:42:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:40.611 10:42:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:27:40.611 10:42:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:40.611 10:42:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:40.611 10:42:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:40.611 10:42:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:40.611 10:42:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:40.611 10:42:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:40.611 10:42:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.611 10:42:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.611 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:40.611 "name": "raid_bdev1", 00:27:40.611 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce", 00:27:40.611 "strip_size_kb": 0, 00:27:40.611 "state": "online", 00:27:40.611 "raid_level": "raid1", 00:27:40.611 "superblock": true, 00:27:40.611 "num_base_bdevs": 2, 00:27:40.611 "num_base_bdevs_discovered": 1, 00:27:40.611 "num_base_bdevs_operational": 1, 00:27:40.611 "base_bdevs_list": [ 00:27:40.611 { 00:27:40.611 "name": null, 00:27:40.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.611 "is_configured": false, 00:27:40.611 "data_offset": 256, 00:27:40.611 "data_size": 7936 00:27:40.611 }, 00:27:40.611 { 00:27:40.611 "name": "BaseBdev2", 00:27:40.611 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0", 00:27:40.611 "is_configured": true, 00:27:40.611 "data_offset": 256, 00:27:40.611 
"data_size": 7936 00:27:40.611 } 00:27:40.611 ] 00:27:40.611 }' 00:27:40.611 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:40.611 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:41.177 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:41.177 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:41.177 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:41.177 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:41.177 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:41.177 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.177 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.435 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:41.435 "name": "raid_bdev1", 00:27:41.435 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce", 00:27:41.435 "strip_size_kb": 0, 00:27:41.435 "state": "online", 00:27:41.435 "raid_level": "raid1", 00:27:41.435 "superblock": true, 00:27:41.435 "num_base_bdevs": 2, 00:27:41.435 "num_base_bdevs_discovered": 1, 00:27:41.435 "num_base_bdevs_operational": 1, 00:27:41.435 "base_bdevs_list": [ 00:27:41.435 { 00:27:41.435 "name": null, 00:27:41.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.435 "is_configured": false, 00:27:41.435 "data_offset": 256, 00:27:41.435 "data_size": 7936 00:27:41.435 }, 
00:27:41.435 { 00:27:41.435 "name": "BaseBdev2", 00:27:41.435 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0", 00:27:41.435 "is_configured": true, 00:27:41.435 "data_offset": 256, 00:27:41.435 "data_size": 7936 00:27:41.435 } 00:27:41.435 ] 00:27:41.435 }' 00:27:41.435 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:41.435 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:41.435 10:42:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:41.435 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:41.435 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:41.435 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:27:41.435 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:41.435 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:41.435 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.435 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:41.435 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:27:41.435 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:41.435 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.435 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:41.435 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:41.435 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:41.693 [2024-07-25 10:42:45.251491] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:41.693 [2024-07-25 10:42:45.251670] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:41.693 [2024-07-25 10:42:45.251693] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:41.693 request: 00:27:41.693 { 00:27:41.693 "base_bdev": "BaseBdev1", 00:27:41.693 "raid_bdev": "raid_bdev1", 00:27:41.693 "method": "bdev_raid_add_base_bdev", 00:27:41.693 "req_id": 1 00:27:41.693 } 00:27:41.693 Got JSON-RPC error response 00:27:41.693 response: 00:27:41.693 { 00:27:41.693 "code": -22, 00:27:41.693 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:41.693 } 00:27:41.693 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:27:41.693 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:27:41.693 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:41.693 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:41.693 10:42:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:27:42.626 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:42.626 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:42.626 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:42.626 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:42.626 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:42.626 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:42.626 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:42.626 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:42.626 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:42.626 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:42.626 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.626 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.883 10:42:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:42.883 "name": "raid_bdev1", 00:27:42.883 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce", 00:27:42.883 "strip_size_kb": 0, 00:27:42.883 "state": "online", 00:27:42.883 "raid_level": "raid1", 00:27:42.883 "superblock": true, 00:27:42.883 "num_base_bdevs": 2, 00:27:42.883 "num_base_bdevs_discovered": 1, 00:27:42.883 "num_base_bdevs_operational": 1, 00:27:42.883 "base_bdevs_list": [ 00:27:42.883 { 00:27:42.883 "name": null, 00:27:42.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.883 "is_configured": false, 00:27:42.883 "data_offset": 256, 00:27:42.883 "data_size": 7936 00:27:42.883 }, 00:27:42.883 { 00:27:42.883 "name": "BaseBdev2", 00:27:42.883 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0", 00:27:42.883 "is_configured": true, 00:27:42.883 "data_offset": 256, 00:27:42.883 "data_size": 7936 00:27:42.883 } 00:27:42.883 ] 00:27:42.883 }' 00:27:42.883 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:42.883 10:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:43.448 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:43.448 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:43.448 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:43.448 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:43.448 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:43.448 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.448 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:43.706 "name": "raid_bdev1", 00:27:43.706 "uuid": "9511a2e5-b51b-4e05-b0d8-23f9e42fa5ce", 00:27:43.706 "strip_size_kb": 0, 00:27:43.706 "state": "online", 00:27:43.706 "raid_level": "raid1", 00:27:43.706 "superblock": true, 00:27:43.706 "num_base_bdevs": 2, 00:27:43.706 "num_base_bdevs_discovered": 1, 00:27:43.706 "num_base_bdevs_operational": 1, 00:27:43.706 "base_bdevs_list": [ 00:27:43.706 { 00:27:43.706 "name": null, 00:27:43.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.706 "is_configured": false, 00:27:43.706 "data_offset": 256, 00:27:43.706 "data_size": 7936 00:27:43.706 }, 00:27:43.706 { 00:27:43.706 "name": "BaseBdev2", 00:27:43.706 "uuid": "ae97ee85-0a48-58f1-aaf8-14fc8a84afa0", 00:27:43.706 "is_configured": true, 00:27:43.706 "data_offset": 256, 00:27:43.706 "data_size": 7936 00:27:43.706 } 00:27:43.706 ] 00:27:43.706 }' 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 2480067 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 2480067 ']' 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@954 -- # kill -0 2480067 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2480067 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2480067' 00:27:43.706 killing process with pid 2480067 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 2480067 00:27:43.706 Received shutdown signal, test time was about 60.000000 seconds 00:27:43.706 00:27:43.706 Latency(us) 00:27:43.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.706 =================================================================================================================== 00:27:43.706 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:43.706 [2024-07-25 10:42:47.412966] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:43.706 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 2480067 00:27:43.706 [2024-07-25 10:42:47.413087] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:43.706 [2024-07-25 10:42:47.413190] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:43.706 [2024-07-25 10:42:47.413207] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x1041060 name raid_bdev1, state offline 00:27:43.964 [2024-07-25 10:42:47.450868] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:44.222 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:27:44.222 00:27:44.222 real 0m28.628s 00:27:44.222 user 0m46.085s 00:27:44.222 sys 0m2.905s 00:27:44.222 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.222 10:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:44.222 ************************************ 00:27:44.222 END TEST raid_rebuild_test_sb_md_interleaved 00:27:44.222 ************************************ 00:27:44.222 10:42:47 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:27:44.222 10:42:47 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:27:44.223 10:42:47 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 2480067 ']' 00:27:44.223 10:42:47 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 2480067 00:27:44.223 10:42:47 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:27:44.223 00:27:44.223 real 18m21.135s 00:27:44.223 user 31m33.394s 00:27:44.223 sys 2m38.330s 00:27:44.223 10:42:47 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.223 10:42:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:44.223 ************************************ 00:27:44.223 END TEST bdev_raid 00:27:44.223 ************************************ 00:27:44.223 10:42:47 -- spdk/autotest.sh@195 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:27:44.223 10:42:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:44.223 10:42:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:44.223 10:42:47 -- common/autotest_common.sh@10 -- # set +x 00:27:44.223 ************************************ 00:27:44.223 START TEST bdevperf_config 
00:27:44.223 ************************************ 00:27:44.223 10:42:47 bdevperf_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:27:44.223 * Looking for test storage... 00:27:44.223 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:44.223 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:27:44.223 10:42:47 bdevperf_config 
-- bdevperf/common.sh@8 -- # local job_section=job0 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:44.223 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:44.223 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:44.223 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:27:44.223 
10:42:47 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:44.223 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:44.223 10:42:47 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:47.527 10:42:50 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-25 10:42:47.943191] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:27:47.527 [2024-07-25 10:42:47.943261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483957 ] 00:27:47.527 Using job config with 4 jobs 00:27:47.527 [2024-07-25 10:42:48.027703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.527 [2024-07-25 10:42:48.151558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.527 cpumask for '\''job0'\'' is too big 00:27:47.527 cpumask for '\''job1'\'' is too big 00:27:47.527 cpumask for '\''job2'\'' is too big 00:27:47.527 cpumask for '\''job3'\'' is too big 00:27:47.527 Running I/O for 2 seconds... 
00:27:47.527 00:27:47.527 Latency(us) 00:27:47.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.527 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.527 Malloc0 : 2.02 25590.78 24.99 0.00 0.00 9990.93 1504.90 13107.20 00:27:47.527 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.527 Malloc0 : 2.02 25568.72 24.97 0.00 0.00 9981.14 1504.90 11553.75 00:27:47.527 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.527 Malloc0 : 2.02 25546.87 24.95 0.00 0.00 9971.69 1498.83 11165.39 00:27:47.527 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.527 Malloc0 : 2.03 25525.07 24.93 0.00 0.00 9962.31 1498.83 11116.85 00:27:47.527 =================================================================================================================== 00:27:47.527 Total : 102231.44 99.84 0.00 0.00 9976.52 1498.83 13107.20' 00:27:47.527 10:42:50 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-25 10:42:47.943191] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:27:47.527 [2024-07-25 10:42:47.943261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483957 ] 00:27:47.527 Using job config with 4 jobs 00:27:47.527 [2024-07-25 10:42:48.027703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.527 [2024-07-25 10:42:48.151558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.527 cpumask for '\''job0'\'' is too big 00:27:47.527 cpumask for '\''job1'\'' is too big 00:27:47.527 cpumask for '\''job2'\'' is too big 00:27:47.527 cpumask for '\''job3'\'' is too big 00:27:47.527 Running I/O for 2 seconds... 
00:27:47.527 00:27:47.527 Latency(us) 00:27:47.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.527 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.527 Malloc0 : 2.02 25590.78 24.99 0.00 0.00 9990.93 1504.90 13107.20 00:27:47.527 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.527 Malloc0 : 2.02 25568.72 24.97 0.00 0.00 9981.14 1504.90 11553.75 00:27:47.527 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.527 Malloc0 : 2.02 25546.87 24.95 0.00 0.00 9971.69 1498.83 11165.39 00:27:47.527 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.527 Malloc0 : 2.03 25525.07 24.93 0.00 0.00 9962.31 1498.83 11116.85 00:27:47.527 =================================================================================================================== 00:27:47.527 Total : 102231.44 99.84 0.00 0.00 9976.52 1498.83 13107.20' 00:27:47.527 10:42:50 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 10:42:47.943191] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:27:47.527 [2024-07-25 10:42:47.943261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483957 ] 00:27:47.527 Using job config with 4 jobs 00:27:47.527 [2024-07-25 10:42:48.027703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.527 [2024-07-25 10:42:48.151558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.527 cpumask for '\''job0'\'' is too big 00:27:47.527 cpumask for '\''job1'\'' is too big 00:27:47.527 cpumask for '\''job2'\'' is too big 00:27:47.527 cpumask for '\''job3'\'' is too big 00:27:47.527 Running I/O for 2 seconds... 
00:27:47.527 00:27:47.527 Latency(us) 00:27:47.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.527 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.527 Malloc0 : 2.02 25590.78 24.99 0.00 0.00 9990.93 1504.90 13107.20 00:27:47.527 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.527 Malloc0 : 2.02 25568.72 24.97 0.00 0.00 9981.14 1504.90 11553.75 00:27:47.527 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.527 Malloc0 : 2.02 25546.87 24.95 0.00 0.00 9971.69 1498.83 11165.39 00:27:47.527 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.527 Malloc0 : 2.03 25525.07 24.93 0.00 0.00 9962.31 1498.83 11116.85 00:27:47.527 =================================================================================================================== 00:27:47.527 Total : 102231.44 99.84 0.00 0.00 9976.52 1498.83 13107.20' 00:27:47.527 10:42:50 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:47.527 10:42:50 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:47.527 10:42:50 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:27:47.527 10:42:50 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:47.527 [2024-07-25 10:42:50.721297] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:27:47.527 [2024-07-25 10:42:50.721392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484238 ] 00:27:47.527 [2024-07-25 10:42:50.823804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.527 [2024-07-25 10:42:50.961583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.527 cpumask for 'job0' is too big 00:27:47.527 cpumask for 'job1' is too big 00:27:47.527 cpumask for 'job2' is too big 00:27:47.527 cpumask for 'job3' is too big 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:27:50.056 Running I/O for 2 seconds... 00:27:50.056 00:27:50.056 Latency(us) 00:27:50.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.056 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:50.056 Malloc0 : 2.02 24738.61 24.16 0.00 0.00 10340.48 1832.58 15922.82 00:27:50.056 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:50.056 Malloc0 : 2.02 24717.22 24.14 0.00 0.00 10326.99 1808.31 14078.10 00:27:50.056 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:50.056 Malloc0 : 2.02 24696.07 24.12 0.00 0.00 10313.25 1796.17 12233.39 00:27:50.056 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:50.056 Malloc0 : 2.02 24675.02 24.10 0.00 0.00 10299.03 1832.58 10631.40 00:27:50.056 =================================================================================================================== 00:27:50.056 Total : 98826.92 96.51 0.00 0.00 10319.94 1796.17 15922.82' 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:50.056 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:50.056 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:50.056 10:42:53 bdevperf_config -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:50.056 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:50.056 10:42:53 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:53.333 10:42:56 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-25 10:42:53.570688] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:27:53.333 [2024-07-25 10:42:53.570771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484551 ] 00:27:53.333 Using job config with 3 jobs 00:27:53.333 [2024-07-25 10:42:53.665250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.333 [2024-07-25 10:42:53.802745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.333 cpumask for '\''job0'\'' is too big 00:27:53.333 cpumask for '\''job1'\'' is too big 00:27:53.333 cpumask for '\''job2'\'' is too big 00:27:53.333 Running I/O for 2 seconds... 
00:27:53.333 00:27:53.333 Latency(us) 00:27:53.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.333 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:53.333 Malloc0 : 2.01 33737.18 32.95 0.00 0.00 7585.67 1771.90 11213.94 00:27:53.333 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:53.333 Malloc0 : 2.01 33707.81 32.92 0.00 0.00 7575.69 1771.90 9417.77 00:27:53.333 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:53.333 Malloc0 : 2.02 33762.72 32.97 0.00 0.00 7547.22 885.95 7815.77 00:27:53.333 =================================================================================================================== 00:27:53.333 Total : 101207.71 98.84 0.00 0.00 7569.50 885.95 11213.94' 00:27:53.333 10:42:56 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-25 10:42:53.570688] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:27:53.333 [2024-07-25 10:42:53.570771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484551 ] 00:27:53.333 Using job config with 3 jobs 00:27:53.333 [2024-07-25 10:42:53.665250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.333 [2024-07-25 10:42:53.802745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.333 cpumask for '\''job0'\'' is too big 00:27:53.333 cpumask for '\''job1'\'' is too big 00:27:53.333 cpumask for '\''job2'\'' is too big 00:27:53.333 Running I/O for 2 seconds... 
00:27:53.333 00:27:53.333 Latency(us) 00:27:53.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.333 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:53.333 Malloc0 : 2.01 33737.18 32.95 0.00 0.00 7585.67 1771.90 11213.94 00:27:53.333 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:53.333 Malloc0 : 2.01 33707.81 32.92 0.00 0.00 7575.69 1771.90 9417.77 00:27:53.333 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:53.333 Malloc0 : 2.02 33762.72 32.97 0.00 0.00 7547.22 885.95 7815.77 00:27:53.333 =================================================================================================================== 00:27:53.333 Total : 101207.71 98.84 0.00 0.00 7569.50 885.95 11213.94' 00:27:53.333 10:42:56 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 10:42:53.570688] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:27:53.334 [2024-07-25 10:42:53.570771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484551 ] 00:27:53.334 Using job config with 3 jobs 00:27:53.334 [2024-07-25 10:42:53.665250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.334 [2024-07-25 10:42:53.802745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.334 cpumask for '\''job0'\'' is too big 00:27:53.334 cpumask for '\''job1'\'' is too big 00:27:53.334 cpumask for '\''job2'\'' is too big 00:27:53.334 Running I/O for 2 seconds... 
00:27:53.334 00:27:53.334 Latency(us) 00:27:53.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.334 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:53.334 Malloc0 : 2.01 33737.18 32.95 0.00 0.00 7585.67 1771.90 11213.94 00:27:53.334 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:53.334 Malloc0 : 2.01 33707.81 32.92 0.00 0.00 7575.69 1771.90 9417.77 00:27:53.334 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:53.334 Malloc0 : 2.02 33762.72 32.97 0.00 0.00 7547.22 885.95 7815.77 00:27:53.334 =================================================================================================================== 00:27:53.334 Total : 101207.71 98.84 0.00 0.00 7569.50 885.95 11213.94' 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:27:53.334 10:42:56 
bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:53.334 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:53.334 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:53.334 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 
00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:53.334 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:53.334 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:53.334 10:42:56 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:55.865 10:42:59 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-25 10:42:56.399271] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:27:55.865 [2024-07-25 10:42:56.399360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484915 ] 00:27:55.865 Using job config with 4 jobs 00:27:55.865 [2024-07-25 10:42:56.509166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.865 [2024-07-25 10:42:56.649033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.865 cpumask for '\''job0'\'' is too big 00:27:55.865 cpumask for '\''job1'\'' is too big 00:27:55.865 cpumask for '\''job2'\'' is too big 00:27:55.865 cpumask for '\''job3'\'' is too big 00:27:55.865 Running I/O for 2 seconds... 00:27:55.865 00:27:55.865 Latency(us) 00:27:55.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.865 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.865 Malloc0 : 2.04 12309.55 12.02 0.00 0.00 20786.33 3835.07 32234.00 00:27:55.865 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.865 Malloc1 : 2.04 12298.76 12.01 0.00 0.00 20785.16 4563.25 32234.00 00:27:55.865 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.865 Malloc0 : 2.04 12288.44 12.00 0.00 0.00 20730.39 3786.52 28544.57 00:27:55.865 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.865 Malloc1 : 2.04 12277.80 11.99 0.00 0.00 20727.49 4538.97 28544.57 00:27:55.865 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.865 Malloc0 : 2.05 12267.44 11.98 0.00 0.00 20673.71 3762.25 24660.95 00:27:55.866 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc1 : 2.05 12256.89 11.97 0.00 0.00 20672.40 4538.97 24660.95 00:27:55.866 
Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc0 : 2.05 12246.56 11.96 0.00 0.00 20615.88 3762.25 21456.97 00:27:55.866 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc1 : 2.05 12236.00 11.95 0.00 0.00 20615.70 4538.97 21359.88 00:27:55.866 =================================================================================================================== 00:27:55.866 Total : 98181.44 95.88 0.00 0.00 20700.88 3762.25 32234.00' 00:27:55.866 10:42:59 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-25 10:42:56.399271] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:27:55.866 [2024-07-25 10:42:56.399360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484915 ] 00:27:55.866 Using job config with 4 jobs 00:27:55.866 [2024-07-25 10:42:56.509166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.866 [2024-07-25 10:42:56.649033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.866 cpumask for '\''job0'\'' is too big 00:27:55.866 cpumask for '\''job1'\'' is too big 00:27:55.866 cpumask for '\''job2'\'' is too big 00:27:55.866 cpumask for '\''job3'\'' is too big 00:27:55.866 Running I/O for 2 seconds... 
00:27:55.866 00:27:55.866 Latency(us) 00:27:55.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.866 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc0 : 2.04 12309.55 12.02 0.00 0.00 20786.33 3835.07 32234.00 00:27:55.866 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc1 : 2.04 12298.76 12.01 0.00 0.00 20785.16 4563.25 32234.00 00:27:55.866 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc0 : 2.04 12288.44 12.00 0.00 0.00 20730.39 3786.52 28544.57 00:27:55.866 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc1 : 2.04 12277.80 11.99 0.00 0.00 20727.49 4538.97 28544.57 00:27:55.866 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc0 : 2.05 12267.44 11.98 0.00 0.00 20673.71 3762.25 24660.95 00:27:55.866 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc1 : 2.05 12256.89 11.97 0.00 0.00 20672.40 4538.97 24660.95 00:27:55.866 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc0 : 2.05 12246.56 11.96 0.00 0.00 20615.88 3762.25 21456.97 00:27:55.866 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc1 : 2.05 12236.00 11.95 0.00 0.00 20615.70 4538.97 21359.88 00:27:55.866 =================================================================================================================== 00:27:55.866 Total : 98181.44 95.88 0.00 0.00 20700.88 3762.25 32234.00' 00:27:55.866 10:42:59 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 10:42:56.399271] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:27:55.866 [2024-07-25 10:42:56.399360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484915 ] 00:27:55.866 Using job config with 4 jobs 00:27:55.866 [2024-07-25 10:42:56.509166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.866 [2024-07-25 10:42:56.649033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.866 cpumask for '\''job0'\'' is too big 00:27:55.866 cpumask for '\''job1'\'' is too big 00:27:55.866 cpumask for '\''job2'\'' is too big 00:27:55.866 cpumask for '\''job3'\'' is too big 00:27:55.866 Running I/O for 2 seconds... 00:27:55.866 00:27:55.866 Latency(us) 00:27:55.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.866 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc0 : 2.04 12309.55 12.02 0.00 0.00 20786.33 3835.07 32234.00 00:27:55.866 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc1 : 2.04 12298.76 12.01 0.00 0.00 20785.16 4563.25 32234.00 00:27:55.866 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc0 : 2.04 12288.44 12.00 0.00 0.00 20730.39 3786.52 28544.57 00:27:55.866 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc1 : 2.04 12277.80 11.99 0.00 0.00 20727.49 4538.97 28544.57 00:27:55.866 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc0 : 2.05 12267.44 11.98 0.00 0.00 20673.71 3762.25 24660.95 00:27:55.866 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc1 : 2.05 12256.89 11.97 0.00 0.00 20672.40 4538.97 24660.95 00:27:55.866 
Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc0 : 2.05 12246.56 11.96 0.00 0.00 20615.88 3762.25 21456.97 00:27:55.866 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.866 Malloc1 : 2.05 12236.00 11.95 0.00 0.00 20615.70 4538.97 21359.88 00:27:55.866 =================================================================================================================== 00:27:55.866 Total : 98181.44 95.88 0.00 0.00 20700.88 3762.25 32234.00' 00:27:55.866 10:42:59 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:55.866 10:42:59 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:55.866 10:42:59 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:27:55.866 10:42:59 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:27:55.866 10:42:59 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:55.866 10:42:59 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:55.866 00:27:55.866 real 0m11.406s 00:27:55.866 user 0m10.094s 00:27:55.866 sys 0m1.099s 00:27:55.866 10:42:59 bdevperf_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:55.866 10:42:59 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:27:55.866 ************************************ 00:27:55.866 END TEST bdevperf_config 00:27:55.866 ************************************ 00:27:55.866 10:42:59 -- spdk/autotest.sh@196 -- # uname -s 00:27:55.866 10:42:59 -- spdk/autotest.sh@196 -- # [[ Linux == Linux ]] 00:27:55.866 10:42:59 -- spdk/autotest.sh@197 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:55.866 10:42:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:55.866 10:42:59 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:27:55.866 10:42:59 -- common/autotest_common.sh@10 -- # set +x 00:27:55.866 ************************************ 00:27:55.866 START TEST reactor_set_interrupt 00:27:55.866 ************************************ 00:27:55.866 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:55.866 * Looking for test storage... 00:27:55.866 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:55.866 10:42:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:27:55.866 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:55.866 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:55.866 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:55.866 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
00:27:55.866 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:55.866 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:27:55.866 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:27:55.866 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:27:55.866 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:27:55.866 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:27:55.866 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:27:55.866 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:27:55.866 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:27:55.866 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:27:55.866 10:42:59 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:27:55.866 10:42:59 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:27:55.866 10:42:59 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:27:55.867 10:42:59 reactor_set_interrupt -- 
common/build_config.sh@8 -- # CONFIG_RBD=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:27:55.867 10:42:59 
reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:27:55.867 10:42:59 reactor_set_interrupt -- 
common/build_config.sh@48 -- # CONFIG_RDMA=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:27:55.867 10:42:59 
reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:27:55.867 10:42:59 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:27:55.867 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 
00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:27:55.867 10:42:59 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:27:55.867 #define SPDK_CONFIG_H 00:27:55.867 #define SPDK_CONFIG_APPS 1 00:27:55.867 #define SPDK_CONFIG_ARCH native 00:27:55.867 #undef SPDK_CONFIG_ASAN 00:27:55.867 #undef SPDK_CONFIG_AVAHI 00:27:55.867 #undef SPDK_CONFIG_CET 00:27:55.867 #define SPDK_CONFIG_COVERAGE 1 00:27:55.867 #define SPDK_CONFIG_CROSS_PREFIX 
00:27:55.867 #define SPDK_CONFIG_CRYPTO 1 00:27:55.867 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:27:55.867 #undef SPDK_CONFIG_CUSTOMOCF 00:27:55.867 #undef SPDK_CONFIG_DAOS 00:27:55.867 #define SPDK_CONFIG_DAOS_DIR 00:27:55.867 #define SPDK_CONFIG_DEBUG 1 00:27:55.867 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:27:55.867 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:27:55.867 #define SPDK_CONFIG_DPDK_INC_DIR 00:27:55.867 #define SPDK_CONFIG_DPDK_LIB_DIR 00:27:55.867 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:27:55.867 #undef SPDK_CONFIG_DPDK_UADK 00:27:55.867 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:27:55.867 #define SPDK_CONFIG_EXAMPLES 1 00:27:55.867 #undef SPDK_CONFIG_FC 00:27:55.867 #define SPDK_CONFIG_FC_PATH 00:27:55.867 #define SPDK_CONFIG_FIO_PLUGIN 1 00:27:55.867 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:27:55.867 #undef SPDK_CONFIG_FUSE 00:27:55.868 #undef SPDK_CONFIG_FUZZER 00:27:55.868 #define SPDK_CONFIG_FUZZER_LIB 00:27:55.868 #undef SPDK_CONFIG_GOLANG 00:27:55.868 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:27:55.868 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:27:55.868 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:27:55.868 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:27:55.868 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:27:55.868 #undef SPDK_CONFIG_HAVE_LIBBSD 00:27:55.868 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:27:55.868 #define SPDK_CONFIG_IDXD 1 00:27:55.868 #define SPDK_CONFIG_IDXD_KERNEL 1 00:27:55.868 #define SPDK_CONFIG_IPSEC_MB 1 00:27:55.868 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:27:55.868 #define SPDK_CONFIG_ISAL 1 00:27:55.868 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:27:55.868 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:27:55.868 #define SPDK_CONFIG_LIBDIR 00:27:55.868 #undef SPDK_CONFIG_LTO 00:27:55.868 #define SPDK_CONFIG_MAX_LCORES 128 00:27:55.868 #define SPDK_CONFIG_NVME_CUSE 1 00:27:55.868 #undef 
SPDK_CONFIG_OCF 00:27:55.868 #define SPDK_CONFIG_OCF_PATH 00:27:55.868 #define SPDK_CONFIG_OPENSSL_PATH 00:27:55.868 #undef SPDK_CONFIG_PGO_CAPTURE 00:27:55.868 #define SPDK_CONFIG_PGO_DIR 00:27:55.868 #undef SPDK_CONFIG_PGO_USE 00:27:55.868 #define SPDK_CONFIG_PREFIX /usr/local 00:27:55.868 #undef SPDK_CONFIG_RAID5F 00:27:55.868 #undef SPDK_CONFIG_RBD 00:27:55.868 #define SPDK_CONFIG_RDMA 1 00:27:55.868 #define SPDK_CONFIG_RDMA_PROV verbs 00:27:55.868 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:27:55.868 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:27:55.868 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:27:55.868 #define SPDK_CONFIG_SHARED 1 00:27:55.868 #undef SPDK_CONFIG_SMA 00:27:55.868 #define SPDK_CONFIG_TESTS 1 00:27:55.868 #undef SPDK_CONFIG_TSAN 00:27:55.868 #define SPDK_CONFIG_UBLK 1 00:27:55.868 #define SPDK_CONFIG_UBSAN 1 00:27:55.868 #undef SPDK_CONFIG_UNIT_TESTS 00:27:55.868 #undef SPDK_CONFIG_URING 00:27:55.868 #define SPDK_CONFIG_URING_PATH 00:27:55.868 #undef SPDK_CONFIG_URING_ZNS 00:27:55.868 #undef SPDK_CONFIG_USDT 00:27:55.868 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:27:55.868 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:27:55.868 #undef SPDK_CONFIG_VFIO_USER 00:27:55.868 #define SPDK_CONFIG_VFIO_USER_DIR 00:27:55.868 #define SPDK_CONFIG_VHOST 1 00:27:55.868 #define SPDK_CONFIG_VIRTIO 1 00:27:55.868 #undef SPDK_CONFIG_VTUNE 00:27:55.868 #define SPDK_CONFIG_VTUNE_DIR 00:27:55.868 #define SPDK_CONFIG_WERROR 1 00:27:55.868 #define SPDK_CONFIG_WPDK_DIR 00:27:55.868 #undef SPDK_CONFIG_XNVME 00:27:55.868 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:27:55.868 10:42:59 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:27:55.868 10:42:59 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
00:27:55.868 10:42:59 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.868 10:42:59 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.868 10:42:59 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.868 10:42:59 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.868 10:42:59 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.868 10:42:59 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:27:55.868 10:42:59 reactor_set_interrupt -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:27:55.868 10:42:59 
reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:27:55.868 10:42:59 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 
00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:27:55.868 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@96 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 
0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- 
common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:27:55.869 10:42:59 
reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 1 00:27:55.869 
10:42:59 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@179 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:55.869 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@202 -- # cat 00:27:55.870 
10:42:59 reactor_set_interrupt -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:27:55.870 10:42:59 reactor_set_interrupt -- 
common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@265 -- # export valgrind= 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@265 -- # valgrind= 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@271 -- # uname -s 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@278 -- # export HUGE_EVEN_ALLOC=yes 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@278 -- # HUGE_EVEN_ALLOC=yes 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@281 -- # MAKE=make 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:27:55.870 10:42:59 reactor_set_interrupt -- 
common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@301 -- # TEST_MODE= 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@320 -- # [[ -z 2485306 ]] 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@320 -- # kill -0 2485306 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local mount target_dir 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.h2UFkd 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@357 -- # mkdir 
-p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.h2UFkd/tests/interrupt /tmp/spdk.h2UFkd 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@329 -- # df -T 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=952066048 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4332363776 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:27:55.870 10:42:59 
reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=51001233408 00:27:55.870 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994708992 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=10993475584 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30992642048 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997352448 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4710400 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=12389982208 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=8962048 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:27:55.871 10:42:59 reactor_set_interrupt -- 
common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30995980288 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=1376256 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:27:55.871 * Looking for test storage... 
00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@370 -- # local target_space new_size 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mount=/ 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@376 -- # target_space=51001233408 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@383 -- # new_size=13208068096 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 
00:27:55.871 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@391 -- # return 0 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:27:55.871 10:42:59 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2485382 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:55.871 10:42:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2485382 /var/tmp/spdk.sock 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 2485382 ']' 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:55.871 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:27:55.871 [2024-07-25 10:42:59.471053] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:27:55.871 [2024-07-25 10:42:59.471136] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485382 ] 00:27:55.871 [2024-07-25 10:42:59.546964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:56.129 [2024-07-25 10:42:59.659183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.129 [2024-07-25 10:42:59.659245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.129 [2024-07-25 10:42:59.659249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.129 [2024-07-25 10:42:59.757624] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:56.129 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:56.129 10:42:59 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:27:56.129 10:42:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:27:56.129 10:42:59 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:56.387 Malloc0 00:27:56.387 Malloc1 00:27:56.387 Malloc2 00:27:56.387 10:43:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:27:56.387 10:43:00 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:27:56.387 10:43:00 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:27:56.387 10:43:00 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:27:56.387 5000+0 records in 00:27:56.387 5000+0 records out 00:27:56.387 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0141752 s, 722 MB/s 
00:27:56.387 10:43:00 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:27:56.644 AIO0 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 2485382 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 2485382 without_thd 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2485382 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:27:56.902 10:43:00 reactor_set_interrupt 
-- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:27:56.902 10:43:00 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:56.903 10:43:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:27:56.903 10:43:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:27:57.160 spdk_thread ids are 1 on reactor0. 
00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2485382 0 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2485382 0 idle 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485382 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485382 -w 256 00:27:57.160 10:43:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485382 root 20 0 128.2g 38400 24960 S 0.0 0.1 0:00.35 reactor_0' 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2485382 root 20 0 128.2g 38400 24960 S 0.0 0.1 0:00.35 reactor_0 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle 
= \b\u\s\y ]] 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2485382 1 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2485382 1 idle 00:27:57.418 10:43:01 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485382 00:27:57.419 10:43:01 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:27:57.419 10:43:01 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:57.419 10:43:01 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:27:57.419 10:43:01 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:27:57.419 10:43:01 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:27:57.419 10:43:01 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:27:57.419 10:43:01 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:27:57.419 10:43:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485382 -w 256 00:27:57.419 10:43:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485386 root 20 0 128.2g 38400 24960 S 0.0 0.1 0:00.00 reactor_1' 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2485386 root 20 0 128.2g 38400 24960 S 0.0 0.1 0:00.00 reactor_1 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:27:57.677 10:43:01 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # awk '{print $9}' 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2485382 2 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2485382 2 idle 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485382 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485382 -w 256 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485387 root 20 0 128.2g 38400 24960 S 0.0 0.1 0:00.00 reactor_2' 
00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2485387 root 20 0 128.2g 38400 24960 S 0.0 0.1 0:00.00 reactor_2 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:27:57.677 10:43:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:27:57.935 [2024-07-25 10:43:01.628140] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:58.192 10:43:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:27:58.449 [2024-07-25 10:43:01.923926] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 
00:27:58.449 [2024-07-25 10:43:01.924194] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:58.449 10:43:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:27:58.706 [2024-07-25 10:43:02.211886] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:27:58.706 [2024-07-25 10:43:02.212074] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2485382 0 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2485382 0 busy 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485382 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485382 -w 256 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485382 root 20 0 128.2g 38400 24960 R 99.9 0.1 0:00.83 reactor_0' 00:27:58.706 10:43:02 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # echo 2485382 root 20 0 128.2g 38400 24960 R 99.9 0.1 0:00.83 reactor_0 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2485382 2 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2485382 2 busy 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485382 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485382 -w 256 00:27:58.706 10:43:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:27:58.963 
10:43:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485387 root 20 0 128.2g 38400 24960 R 99.9 0.1 0:00.33 reactor_2' 00:27:58.963 10:43:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2485387 root 20 0 128.2g 38400 24960 R 99.9 0.1 0:00.33 reactor_2 00:27:58.963 10:43:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:27:58.963 10:43:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:27:58.963 10:43:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:27:58.963 10:43:02 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:27:58.963 10:43:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:27:58.963 10:43:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:27:58.963 10:43:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:27:58.963 10:43:02 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:27:58.963 10:43:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:27:59.221 [2024-07-25 10:43:02.795873] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:27:59.221 [2024-07-25 10:43:02.796008] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:59.221 10:43:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:27:59.221 10:43:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2485382 2 00:27:59.221 10:43:02 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2485382 2 idle 00:27:59.221 10:43:02 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485382 00:27:59.221 10:43:02 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:27:59.221 10:43:02 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:59.221 10:43:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:27:59.221 10:43:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:27:59.221 10:43:02 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:27:59.221 10:43:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:27:59.221 10:43:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:27:59.221 10:43:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485382 -w 256 00:27:59.221 10:43:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:27:59.478 10:43:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485387 root 20 0 128.2g 38400 24960 S 0.0 0.1 0:00.58 reactor_2' 00:27:59.478 10:43:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2485387 root 20 0 128.2g 38400 24960 S 0.0 0.1 0:00.58 reactor_2 00:27:59.478 10:43:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:27:59.478 10:43:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:27:59.478 10:43:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:27:59.478 10:43:02 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:27:59.478 10:43:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:27:59.478 10:43:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:27:59.478 10:43:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:27:59.478 10:43:02 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:27:59.478 10:43:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:27:59.745 [2024-07-25 10:43:03.203865] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:27:59.745 [2024-07-25 10:43:03.204023] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:59.745 10:43:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:27:59.745 10:43:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:27:59.745 10:43:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:27:59.745 [2024-07-25 10:43:03.448042] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:00.002 10:43:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2485382 0 00:28:00.002 10:43:03 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2485382 0 idle 00:28:00.002 10:43:03 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485382 00:28:00.002 10:43:03 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:00.002 10:43:03 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:00.002 10:43:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:00.002 10:43:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:00.002 10:43:03 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:00.002 10:43:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:00.002 10:43:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:00.002 10:43:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485382 -w 256 00:28:00.002 10:43:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485382 root 20 0 128.2g 38400 24960 S 0.0 0.1 0:01.64 reactor_0' 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2485382 root 20 0 128.2g 38400 24960 S 0.0 0.1 0:01.64 reactor_0 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = 
\i\d\l\e ]] 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:28:00.003 10:43:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 2485382 00:28:00.003 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 2485382 ']' 00:28:00.003 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 2485382 00:28:00.003 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:28:00.003 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:00.003 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2485382 00:28:00.003 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:00.003 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:00.003 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2485382' 00:28:00.003 killing process with pid 2485382 00:28:00.003 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 2485382 00:28:00.003 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 2485382 00:28:00.568 10:43:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:28:00.568 10:43:03 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:28:00.568 10:43:03 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:28:00.568 10:43:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.568 10:43:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:28:00.568 10:43:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2485977 00:28:00.568 10:43:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:28:00.568 10:43:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:00.568 10:43:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2485977 /var/tmp/spdk.sock 00:28:00.568 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 2485977 ']' 00:28:00.568 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.568 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:00.568 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.568 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:00.568 10:43:03 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:00.568 [2024-07-25 10:43:04.010125] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:28:00.568 [2024-07-25 10:43:04.010201] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485977 ] 00:28:00.568 [2024-07-25 10:43:04.095459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:00.568 [2024-07-25 10:43:04.220821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.568 [2024-07-25 10:43:04.224123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.568 [2024-07-25 10:43:04.224134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.825 [2024-07-25 10:43:04.318929] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:00.825 10:43:04 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.825 10:43:04 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:28:00.825 10:43:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:28:00.825 10:43:04 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:01.083 Malloc0 00:28:01.083 Malloc1 00:28:01.083 Malloc2 00:28:01.083 10:43:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:28:01.083 10:43:04 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:28:01.083 10:43:04 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:28:01.083 10:43:04 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:28:01.083 5000+0 records in 00:28:01.083 5000+0 records out 00:28:01.083 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0140026 s, 731 MB/s 
00:28:01.083 10:43:04 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:28:01.356 AIO0 00:28:01.356 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 2485977 00:28:01.356 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 2485977 00:28:01.356 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2485977 00:28:01.356 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:28:01.356 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:28:01.356 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:28:01.356 10:43:05 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:28:01.356 10:43:05 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:28:01.356 10:43:05 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:28:01.356 10:43:05 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:01.356 10:43:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:28:01.356 10:43:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:01.615 10:43:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:28:01.615 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:28:01.615 10:43:05 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:28:01.615 10:43:05 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:28:01.615 10:43:05 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:28:01.615 10:43:05 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:28:01.615 10:43:05 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:01.615 10:43:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:28:01.615 10:43:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:28:01.874 spdk_thread ids are 1 on reactor0. 
00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2485977 0 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2485977 0 idle 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485977 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485977 -w 256 00:28:01.874 10:43:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:02.132 10:43:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485977 root 20 0 128.2g 38016 24576 S 0.0 0.1 0:00.37 reactor_0' 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2485977 root 20 0 128.2g 38016 24576 S 0.0 0.1 0:00.37 reactor_0 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle 
= \b\u\s\y ]] 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2485977 1 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2485977 1 idle 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485977 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485977 -w 256 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:28:02.133 10:43:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485988 root 20 0 128.2g 38016 24576 S 0.0 0.1 0:00.00 reactor_1' 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2485988 root 20 0 128.2g 38016 24576 S 0.0 0.1 0:00.00 reactor_1 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:02.391 10:43:05 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # awk '{print $9}' 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2485977 2 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2485977 2 idle 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485977 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485977 -w 256 00:28:02.391 10:43:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:28:02.391 10:43:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485989 root 20 0 128.2g 38016 24576 S 0.0 0.1 0:00.00 reactor_2' 
00:28:02.391 10:43:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2485989 root 20 0 128.2g 38016 24576 S 0.0 0.1 0:00.00 reactor_2 00:28:02.391 10:43:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:02.391 10:43:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:02.391 10:43:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:02.391 10:43:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:02.391 10:43:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:02.391 10:43:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:02.391 10:43:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:02.391 10:43:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:02.391 10:43:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:28:02.391 10:43:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:28:02.649 [2024-07-25 10:43:06.268717] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:28:02.649 [2024-07-25 10:43:06.268864] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:28:02.649 [2024-07-25 10:43:06.268980] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:02.649 10:43:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:28:02.906 [2024-07-25 10:43:06.509295] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:28:02.907 [2024-07-25 10:43:06.509490] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:02.907 10:43:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:02.907 10:43:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2485977 0 00:28:02.907 10:43:06 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2485977 0 busy 00:28:02.907 10:43:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485977 00:28:02.907 10:43:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:02.907 10:43:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:28:02.907 10:43:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:28:02.907 10:43:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:02.907 10:43:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:02.907 10:43:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:02.907 10:43:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485977 -w 256 00:28:02.907 10:43:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485977 root 20 0 128.2g 38016 24576 R 99.9 0.1 0:00.79 reactor_0' 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2485977 root 20 0 128.2g 38016 24576 R 99.9 0.1 0:00.79 reactor_0 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:28:03.164 10:43:06 reactor_set_interrupt -- 
interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2485977 2 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2485977 2 busy 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485977 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485977 -w 256 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485989 root 20 0 128.2g 38016 24576 R 99.9 0.1 0:00.33 reactor_2' 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2485989 root 20 0 128.2g 38016 24576 R 99.9 0.1 0:00.33 reactor_2 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:03.164 10:43:06 
reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:03.164 10:43:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:28:03.422 [2024-07-25 10:43:07.078856] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:28:03.422 [2024-07-25 10:43:07.079006] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:03.422 10:43:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:28:03.422 10:43:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2485977 2 00:28:03.422 10:43:07 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2485977 2 idle 00:28:03.422 10:43:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485977 00:28:03.422 10:43:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:28:03.422 10:43:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:03.422 10:43:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:03.422 10:43:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:03.422 10:43:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:03.422 10:43:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 
00:28:03.422 10:43:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:03.422 10:43:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485977 -w 256 00:28:03.422 10:43:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:28:03.680 10:43:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485989 root 20 0 128.2g 38016 24576 S 0.0 0.1 0:00.56 reactor_2' 00:28:03.680 10:43:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2485989 root 20 0 128.2g 38016 24576 S 0.0 0.1 0:00.56 reactor_2 00:28:03.680 10:43:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:03.680 10:43:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:03.680 10:43:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:03.680 10:43:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:03.680 10:43:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:03.680 10:43:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:03.680 10:43:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:03.680 10:43:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:03.680 10:43:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:28:03.938 [2024-07-25 10:43:07.544089] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:28:03.938 [2024-07-25 10:43:07.544316] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
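The four rpc.py calls in the trace toggle each reactor between modes: `reactor_set_interrupt_mode <idx> -d` disables interrupt mode (the NOTICE lines show the thread switching to poll mode), and the same call without `-d` re-enables it. A small helper that rebuilds those command lines; it echoes rather than executes, since the rpc.py path differs per tree and running it needs a live SPDK target:

```shell
#!/usr/bin/env bash
# Build the rpc.py invocation used in the trace. Echoing keeps the
# sketch runnable without an SPDK application listening.
rpc_cmd() {
    local idx=$1 disable=${2:-}
    echo "rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode ${idx}${disable:+ -d}"
}
```

For example, `rpc_cmd 0 -d` reproduces the reactor-0 disable call logged at reactor_set_interrupt.sh line 43.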
00:28:03.938 [2024-07-25 10:43:07.544356] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:03.938 10:43:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:28:03.938 10:43:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2485977 0 00:28:03.938 10:43:07 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2485977 0 idle 00:28:03.938 10:43:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2485977 00:28:03.938 10:43:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:03.938 10:43:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:03.938 10:43:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:03.938 10:43:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:03.938 10:43:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:03.938 10:43:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:03.938 10:43:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:03.938 10:43:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2485977 -w 256 00:28:03.938 10:43:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:04.196 10:43:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2485977 root 20 0 128.2g 38016 24576 S 0.0 0.1 0:01.64 reactor_0' 00:28:04.196 10:43:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2485977 root 20 0 128.2g 38016 24576 S 0.0 0.1 0:01.64 reactor_0 00:28:04.196 10:43:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:04.196 10:43:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:04.196 10:43:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:04.196 10:43:07 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:04.196 10:43:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:04.196 10:43:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:04.196 10:43:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:04.196 10:43:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:04.196 10:43:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:28:04.196 10:43:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:28:04.196 10:43:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:04.196 10:43:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 2485977 00:28:04.196 10:43:07 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 2485977 ']' 00:28:04.196 10:43:07 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 2485977 00:28:04.196 10:43:07 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:28:04.196 10:43:07 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:04.196 10:43:07 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2485977 00:28:04.196 10:43:07 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:04.196 10:43:07 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:04.196 10:43:07 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2485977' 00:28:04.196 killing process with pid 2485977 00:28:04.196 10:43:07 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 2485977 00:28:04.196 10:43:07 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 2485977 00:28:04.454 10:43:08 reactor_set_interrupt -- 
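The shutdown sequence traced above follows the `killprocess` helper from autotest_common.sh: confirm the pid is alive with `kill -0`, look up its command name, refuse to signal a `sudo` wrapper, then kill and reap it. A condensed sketch of that pattern (the function name is mine, and error handling is trimmed relative to the real helper):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess flow seen in the trace.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1        # process must still exist
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it when it is our child
}
```

The `wait` at the end is why the trace shows `kill 2485977` immediately followed by `wait 2485977`: reaping avoids leaving a zombie behind for the next test stage.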
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:28:04.454 10:43:08 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:28:04.454 00:28:04.454 real 0m8.795s 00:28:04.454 user 0m9.405s 00:28:04.454 sys 0m1.566s 00:28:04.454 10:43:08 reactor_set_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:04.454 10:43:08 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:04.454 ************************************ 00:28:04.454 END TEST reactor_set_interrupt 00:28:04.454 ************************************ 00:28:04.454 10:43:08 -- spdk/autotest.sh@198 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:28:04.454 10:43:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:04.454 10:43:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:04.454 10:43:08 -- common/autotest_common.sh@10 -- # set +x 00:28:04.454 ************************************ 00:28:04.454 START TEST reap_unregistered_poller 00:28:04.454 ************************************ 00:28:04.454 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:28:04.715 * Looking for test storage... 
00:28:04.715 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:04.715 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:28:04.715 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:28:04.715 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:04.715 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:04.715 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:28:04.715 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:04.715 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:28:04.715 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:28:04.715 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:28:04.715 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:28:04.715 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:28:04.715 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:28:04.715 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:28:04.715 10:43:08 reap_unregistered_poller -- 
common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:28:04.715 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:28:04.715 10:43:08 
reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:28:04.715 10:43:08 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@36 -- # 
CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:28:04.716 
10:43:08 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:28:04.716 10:43:08 reap_unregistered_poller -- 
common/build_config.sh@75 -- # CONFIG_TESTS=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:28:04.716 10:43:08 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:28:04.716 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@12 -- # 
_examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:28:04.716 10:43:08 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:28:04.716 #define SPDK_CONFIG_H 00:28:04.716 #define SPDK_CONFIG_APPS 1 00:28:04.716 #define SPDK_CONFIG_ARCH native 00:28:04.716 #undef SPDK_CONFIG_ASAN 00:28:04.716 #undef SPDK_CONFIG_AVAHI 00:28:04.716 #undef SPDK_CONFIG_CET 00:28:04.716 #define SPDK_CONFIG_COVERAGE 1 00:28:04.716 #define SPDK_CONFIG_CROSS_PREFIX 00:28:04.716 #define SPDK_CONFIG_CRYPTO 1 00:28:04.716 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:28:04.716 #undef SPDK_CONFIG_CUSTOMOCF 00:28:04.716 #undef SPDK_CONFIG_DAOS 00:28:04.716 #define SPDK_CONFIG_DAOS_DIR 00:28:04.716 #define SPDK_CONFIG_DEBUG 1 00:28:04.716 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:28:04.716 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:28:04.716 #define SPDK_CONFIG_DPDK_INC_DIR 00:28:04.716 #define SPDK_CONFIG_DPDK_LIB_DIR 00:28:04.716 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:28:04.716 #undef SPDK_CONFIG_DPDK_UADK 00:28:04.716 #define SPDK_CONFIG_ENV 
/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:28:04.716 #define SPDK_CONFIG_EXAMPLES 1 00:28:04.716 #undef SPDK_CONFIG_FC 00:28:04.716 #define SPDK_CONFIG_FC_PATH 00:28:04.716 #define SPDK_CONFIG_FIO_PLUGIN 1 00:28:04.716 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:28:04.716 #undef SPDK_CONFIG_FUSE 00:28:04.716 #undef SPDK_CONFIG_FUZZER 00:28:04.716 #define SPDK_CONFIG_FUZZER_LIB 00:28:04.716 #undef SPDK_CONFIG_GOLANG 00:28:04.716 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:28:04.716 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:28:04.716 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:28:04.716 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:28:04.716 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:28:04.716 #undef SPDK_CONFIG_HAVE_LIBBSD 00:28:04.716 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:28:04.716 #define SPDK_CONFIG_IDXD 1 00:28:04.716 #define SPDK_CONFIG_IDXD_KERNEL 1 00:28:04.716 #define SPDK_CONFIG_IPSEC_MB 1 00:28:04.716 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:28:04.716 #define SPDK_CONFIG_ISAL 1 00:28:04.716 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:28:04.716 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:28:04.716 #define SPDK_CONFIG_LIBDIR 00:28:04.716 #undef SPDK_CONFIG_LTO 00:28:04.716 #define SPDK_CONFIG_MAX_LCORES 128 00:28:04.716 #define SPDK_CONFIG_NVME_CUSE 1 00:28:04.716 #undef SPDK_CONFIG_OCF 00:28:04.716 #define SPDK_CONFIG_OCF_PATH 00:28:04.716 #define SPDK_CONFIG_OPENSSL_PATH 00:28:04.716 #undef SPDK_CONFIG_PGO_CAPTURE 00:28:04.716 #define SPDK_CONFIG_PGO_DIR 00:28:04.716 #undef SPDK_CONFIG_PGO_USE 00:28:04.716 #define SPDK_CONFIG_PREFIX /usr/local 00:28:04.716 #undef SPDK_CONFIG_RAID5F 00:28:04.716 #undef SPDK_CONFIG_RBD 00:28:04.716 #define SPDK_CONFIG_RDMA 1 00:28:04.716 #define SPDK_CONFIG_RDMA_PROV verbs 00:28:04.716 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:28:04.716 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:28:04.716 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:28:04.716 #define 
SPDK_CONFIG_SHARED 1 00:28:04.716 #undef SPDK_CONFIG_SMA 00:28:04.716 #define SPDK_CONFIG_TESTS 1 00:28:04.716 #undef SPDK_CONFIG_TSAN 00:28:04.716 #define SPDK_CONFIG_UBLK 1 00:28:04.716 #define SPDK_CONFIG_UBSAN 1 00:28:04.716 #undef SPDK_CONFIG_UNIT_TESTS 00:28:04.716 #undef SPDK_CONFIG_URING 00:28:04.716 #define SPDK_CONFIG_URING_PATH 00:28:04.716 #undef SPDK_CONFIG_URING_ZNS 00:28:04.716 #undef SPDK_CONFIG_USDT 00:28:04.716 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:28:04.716 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:28:04.716 #undef SPDK_CONFIG_VFIO_USER 00:28:04.716 #define SPDK_CONFIG_VFIO_USER_DIR 00:28:04.716 #define SPDK_CONFIG_VHOST 1 00:28:04.716 #define SPDK_CONFIG_VIRTIO 1 00:28:04.716 #undef SPDK_CONFIG_VTUNE 00:28:04.716 #define SPDK_CONFIG_VTUNE_DIR 00:28:04.716 #define SPDK_CONFIG_WERROR 1 00:28:04.716 #define SPDK_CONFIG_WPDK_DIR 00:28:04.716 #undef SPDK_CONFIG_XNVME 00:28:04.716 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:28:04.717 10:43:08 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:28:04.717 10:43:08 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.717 10:43:08 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.717 10:43:08 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.717 10:43:08 reap_unregistered_poller -- paths/export.sh@2 -- # 
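The long escaped glob above (`== *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G*`) is simply checking whether the dumped config.h contents define SPDK_CONFIG_DEBUG. The same check expressed against a file, sketched here with an assumed helper name and path argument:

```shell
#!/usr/bin/env bash
# True when the given config header defines the given macro, mirroring
# the glob match on the config.h contents in the trace.
config_has_define() {
    local header=$1 macro=$2
    grep -Eq "^#define ${macro}( |\$)" "$header"
}
```

In the trace the result feeds `(( SPDK_AUTOTEST_DEBUG_APPS ))`, i.e. debug-only application tests are gated on the build being a debug build.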
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.717 10:43:08 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.717 10:43:08 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.717 10:43:08 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:28:04.717 10:43:08 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source 
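Note that the exported PATH above carries the go, golangci, and protoc directories four times over: export.sh prepends its directories on every `source`, and the trace shows it has been sourced repeatedly. That is harmless but noisy; a dedup guard like the following (not present in the original scripts) would prevent the growth:

```shell
#!/usr/bin/env bash
# Echo path list $2 with directory $1 prepended, unless $1 is already
# present. Wrapping the list in ':' makes the containment check exact.
prepend_unique() {
    case ":$2:" in
        *":$1:"*) echo "$2" ;;
        *)        echo "$1:$2" ;;
    esac
}
```

Usage would replace each `PATH=$dir:$PATH` in export.sh with `PATH=$(prepend_unique "$dir" "$PATH")`.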
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:28:04.717 10:43:08 reap_unregistered_poller -- 
pm/common@76 -- # SUDO[0]= 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:28:04.717 10:43:08 reap_unregistered_poller -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:28:04.717 10:43:08 
reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:28:04.717 10:43:08 
reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:28:04.717 10:43:08 
reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:28:04.717 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:28:04.718 10:43:08 
reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:28:04.718 10:43:08 reap_unregistered_poller 
-- common/autotest_common.sh@150 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 1 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:28:04.718 10:43:08 reap_unregistered_poller -- 
common/autotest_common.sh@171 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:28:04.718 10:43:08 
reap_unregistered_poller -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@202 -- # cat 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:28:04.718 10:43:08 reap_unregistered_poller -- 
common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export 
AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export valgrind= 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@265 -- # valgrind= 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@271 -- # uname -s 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:28:04.718 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@278 -- # export HUGE_EVEN_ALLOC=yes 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@278 -- # HUGE_EVEN_ALLOC=yes 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@281 -- # MAKE=make 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@298 -- # HUGEMEM=4096 
00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@301 -- # TEST_MODE= 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@320 -- # [[ -z 2486498 ]] 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@320 -- # kill -0 2486498 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local mount target_dir 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.85ECwP 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.85ECwP/tests/interrupt 
/tmp/spdk.85ECwP 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@329 -- # df -T 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=952066048 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4332363776 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # 
fss["$mount"]=overlay 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=51001094144 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994708992 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=10993614848 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30992642048 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997352448 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4710400 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=12389982208 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=8962048 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:28:04.719 10:43:08 reap_unregistered_poller -- 
common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30995980288 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=1376256 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:28:04.719 * Looking for test storage... 
00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@370 -- # local target_space new_size 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mount=/ 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@376 -- # target_space=51001094144 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@383 -- # new_size=13208207360 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:04.719 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@391 -- # return 0 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:28:04.719 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:28:04.719 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:28:04.719 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:04.719 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:28:04.719 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:28:04.720 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:28:04.720 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:28:04.720 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:28:04.720 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:28:04.720 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # 
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:28:04.720 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:28:04.720 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.720 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:28:04.720 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2486539 00:28:04.720 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:28:04.720 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:04.720 10:43:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2486539 /var/tmp/spdk.sock 00:28:04.720 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@831 -- # '[' -z 2486539 ']' 00:28:04.720 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.720 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:04.720 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:04.720 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:04.720 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:28:04.720 [2024-07-25 10:43:08.309253] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:28:04.720 [2024-07-25 10:43:08.309318] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486539 ] 00:28:04.720 [2024-07-25 10:43:08.387198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:04.977 [2024-07-25 10:43:08.504494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.977 [2024-07-25 10:43:08.504523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.977 [2024-07-25 10:43:08.504526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.977 [2024-07-25 10:43:08.589613] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:04.977 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:04.977 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@864 -- # return 0 00:28:04.977 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:28:04.977 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.977 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:28:04.977 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:28:04.977 10:43:08 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.977 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:28:04.977 "name": "app_thread", 00:28:04.977 "id": 1, 00:28:04.977 "active_pollers": [], 00:28:04.977 "timed_pollers": [ 00:28:04.977 { 00:28:04.977 "name": "rpc_subsystem_poll_servers", 00:28:04.977 "id": 1, 00:28:04.977 "state": "waiting", 00:28:04.977 "run_count": 0, 00:28:04.977 "busy_count": 0, 00:28:04.977 "period_ticks": 10800000 00:28:04.977 } 00:28:04.977 ], 00:28:04.977 "paused_pollers": [] 00:28:04.977 }' 00:28:04.977 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:28:04.977 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:28:04.977 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:28:05.233 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:28:05.233 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:28:05.233 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:28:05.233 
10:43:08 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:28:05.233 10:43:08 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:28:05.233 10:43:08 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:28:05.233 5000+0 records in 00:28:05.233 5000+0 records out 00:28:05.233 10240000 bytes (10 MB, 9.8 MiB) copied, 0.014036 s, 730 MB/s 00:28:05.233 10:43:08 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:28:05.490 AIO0 00:28:05.490 10:43:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:05.748 10:43:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:28:05.748 10:43:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:28:05.748 10:43:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:28:05.748 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.748 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:28:05.748 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.748 10:43:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:28:05.748 "name": "app_thread", 00:28:05.748 "id": 1, 00:28:05.748 "active_pollers": [], 00:28:05.748 "timed_pollers": [ 00:28:05.748 { 00:28:05.748 "name": "rpc_subsystem_poll_servers", 00:28:05.748 "id": 1, 00:28:05.748 "state": "waiting", 00:28:05.748 "run_count": 0, 00:28:05.748 "busy_count": 0, 
00:28:05.748 "period_ticks": 10800000 00:28:05.748 } 00:28:05.748 ], 00:28:05.748 "paused_pollers": [] 00:28:05.748 }' 00:28:05.748 10:43:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:28:06.005 10:43:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:28:06.005 10:43:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:28:06.005 10:43:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:28:06.005 10:43:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:28:06.005 10:43:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:28:06.005 10:43:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:28:06.005 10:43:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 2486539 00:28:06.005 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@950 -- # '[' -z 2486539 ']' 00:28:06.005 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@954 -- # kill -0 2486539 00:28:06.005 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@955 -- # uname 00:28:06.005 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:06.005 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2486539 00:28:06.005 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:06.005 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:06.005 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 2486539' 00:28:06.005 killing process with pid 2486539 00:28:06.005 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@969 -- # kill 2486539 00:28:06.005 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@974 -- # wait 2486539 00:28:06.263 10:43:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:28:06.263 10:43:09 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:28:06.263 00:28:06.263 real 0m1.696s 00:28:06.263 user 0m1.385s 00:28:06.263 sys 0m0.429s 00:28:06.263 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:06.263 10:43:09 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:28:06.263 ************************************ 00:28:06.263 END TEST reap_unregistered_poller 00:28:06.263 ************************************ 00:28:06.263 10:43:09 -- spdk/autotest.sh@202 -- # uname -s 00:28:06.263 10:43:09 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:28:06.263 10:43:09 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:28:06.263 10:43:09 -- spdk/autotest.sh@209 -- # [[ 1 -eq 0 ]] 00:28:06.263 10:43:09 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:28:06.263 10:43:09 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:28:06.263 10:43:09 -- spdk/autotest.sh@264 -- # timing_exit lib 00:28:06.263 10:43:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:06.263 10:43:09 -- common/autotest_common.sh@10 -- # set +x 00:28:06.263 10:43:09 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:28:06.263 10:43:09 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:28:06.263 10:43:09 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:28:06.263 10:43:09 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:06.263 10:43:09 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:06.263 10:43:09 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:28:06.263 10:43:09 
-- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:28:06.263 10:43:09 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:28:06.263 10:43:09 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:06.263 10:43:09 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:06.263 10:43:09 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:06.263 10:43:09 -- spdk/autotest.sh@351 -- # '[' 1 -eq 1 ']' 00:28:06.263 10:43:09 -- spdk/autotest.sh@352 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:28:06.263 10:43:09 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:06.263 10:43:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:06.263 10:43:09 -- common/autotest_common.sh@10 -- # set +x 00:28:06.263 ************************************ 00:28:06.263 START TEST compress_compdev 00:28:06.263 ************************************ 00:28:06.263 10:43:09 compress_compdev -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:28:06.263 * Looking for test storage... 
00:28:06.263 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:28:06.263 10:43:09 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.263 10:43:09 compress_compdev -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:28:06.263 10:43:09 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.263 10:43:09 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.263 10:43:09 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.263 10:43:09 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.264 10:43:09 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.264 10:43:09 compress_compdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.264 10:43:09 compress_compdev -- 
paths/export.sh@5 -- # export PATH 00:28:06.264 10:43:09 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.264 10:43:09 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:28:06.264 10:43:09 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:06.264 10:43:09 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:06.264 10:43:09 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.264 10:43:09 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.264 10:43:09 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.264 10:43:09 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:06.264 10:43:09 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:06.264 10:43:09 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:06.537 10:43:09 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:06.537 10:43:09 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:28:06.537 10:43:09 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:28:06.537 10:43:09 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:28:06.537 10:43:09 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:28:06.537 10:43:09 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2486883 00:28:06.537 10:43:09 compress_compdev -- 
compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:28:06.537 10:43:09 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:06.537 10:43:09 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2486883 00:28:06.537 10:43:09 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 2486883 ']' 00:28:06.537 10:43:09 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.537 10:43:09 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.537 10:43:09 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.538 10:43:09 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.538 10:43:09 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:28:06.538 [2024-07-25 10:43:10.027738] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:28:06.538 [2024-07-25 10:43:10.027814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486883 ] 00:28:06.538 [2024-07-25 10:43:10.105571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:06.538 [2024-07-25 10:43:10.219009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.538 [2024-07-25 10:43:10.219013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.126 [2024-07-25 10:43:10.831634] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:28:07.383 10:43:10 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:07.383 10:43:10 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:28:07.383 10:43:10 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:28:07.383 10:43:10 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:07.383 10:43:10 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:10.657 [2024-07-25 10:43:14.079843] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x15e3120 PMD being used: compress_qat 00:28:10.657 10:43:14 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:10.657 10:43:14 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:28:10.657 10:43:14 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:10.657 10:43:14 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:10.657 10:43:14 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:10.657 10:43:14 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:28:10.657 10:43:14 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:10.986 10:43:14 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:10.986 [ 00:28:10.986 { 00:28:10.986 "name": "Nvme0n1", 00:28:10.986 "aliases": [ 00:28:10.986 "bd7b2c5c-a4e9-410d-aafc-f9dc064107bb" 00:28:10.986 ], 00:28:10.986 "product_name": "NVMe disk", 00:28:10.986 "block_size": 512, 00:28:10.986 "num_blocks": 1953525168, 00:28:10.986 "uuid": "bd7b2c5c-a4e9-410d-aafc-f9dc064107bb", 00:28:10.986 "assigned_rate_limits": { 00:28:10.986 "rw_ios_per_sec": 0, 00:28:10.986 "rw_mbytes_per_sec": 0, 00:28:10.986 "r_mbytes_per_sec": 0, 00:28:10.986 "w_mbytes_per_sec": 0 00:28:10.986 }, 00:28:10.986 "claimed": false, 00:28:10.986 "zoned": false, 00:28:10.986 "supported_io_types": { 00:28:10.986 "read": true, 00:28:10.986 "write": true, 00:28:10.986 "unmap": true, 00:28:10.986 "flush": true, 00:28:10.986 "reset": true, 00:28:10.986 "nvme_admin": true, 00:28:10.986 "nvme_io": true, 00:28:10.986 "nvme_io_md": false, 00:28:10.986 "write_zeroes": true, 00:28:10.986 "zcopy": false, 00:28:10.986 "get_zone_info": false, 00:28:10.986 "zone_management": false, 00:28:10.986 "zone_append": false, 00:28:10.986 "compare": false, 00:28:10.986 "compare_and_write": false, 00:28:10.986 "abort": true, 00:28:10.986 "seek_hole": false, 00:28:10.986 "seek_data": false, 00:28:10.986 "copy": false, 00:28:10.986 "nvme_iov_md": false 00:28:10.986 }, 00:28:10.986 "driver_specific": { 00:28:10.986 "nvme": [ 00:28:10.986 { 00:28:10.986 "pci_address": "0000:0b:00.0", 00:28:10.986 "trid": { 00:28:10.986 "trtype": "PCIe", 00:28:10.986 "traddr": "0000:0b:00.0" 00:28:10.986 }, 00:28:10.986 "ctrlr_data": { 00:28:10.986 "cntlid": 0, 00:28:10.986 "vendor_id": "0x8086", 00:28:10.986 "model_number": "INTEL SSDPE2KX010T8", 00:28:10.986 
"serial_number": "BTLJ72430F4Q1P0FGN", 00:28:10.986 "firmware_revision": "VDV10184", 00:28:10.986 "oacs": { 00:28:10.986 "security": 1, 00:28:10.986 "format": 1, 00:28:10.986 "firmware": 1, 00:28:10.986 "ns_manage": 1 00:28:10.986 }, 00:28:10.986 "multi_ctrlr": false, 00:28:10.986 "ana_reporting": false 00:28:10.986 }, 00:28:10.986 "vs": { 00:28:10.986 "nvme_version": "1.2" 00:28:10.986 }, 00:28:10.986 "ns_data": { 00:28:10.986 "id": 1, 00:28:10.986 "can_share": false 00:28:10.986 }, 00:28:10.986 "security": { 00:28:10.986 "opal": true 00:28:10.986 } 00:28:10.986 } 00:28:10.986 ], 00:28:10.986 "mp_policy": "active_passive" 00:28:10.986 } 00:28:10.986 } 00:28:10.986 ] 00:28:10.986 10:43:14 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:10.986 10:43:14 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:11.243 [2024-07-25 10:43:14.840834] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x15e55c0 PMD being used: compress_qat 00:28:12.174 10766fc2-0077-4695-a36d-0d1cf9b9398b 00:28:12.174 10:43:15 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:12.432 8c9dfca4-7ca2-4e21-8a54-8562497bbd0a 00:28:12.432 10:43:15 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:12.432 10:43:15 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:28:12.432 10:43:15 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:12.432 10:43:15 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:12.432 10:43:15 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:12.432 10:43:15 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:12.432 10:43:15 compress_compdev -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:12.688 10:43:16 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:12.945 [ 00:28:12.945 { 00:28:12.945 "name": "8c9dfca4-7ca2-4e21-8a54-8562497bbd0a", 00:28:12.945 "aliases": [ 00:28:12.945 "lvs0/lv0" 00:28:12.945 ], 00:28:12.945 "product_name": "Logical Volume", 00:28:12.945 "block_size": 512, 00:28:12.945 "num_blocks": 204800, 00:28:12.945 "uuid": "8c9dfca4-7ca2-4e21-8a54-8562497bbd0a", 00:28:12.945 "assigned_rate_limits": { 00:28:12.945 "rw_ios_per_sec": 0, 00:28:12.945 "rw_mbytes_per_sec": 0, 00:28:12.945 "r_mbytes_per_sec": 0, 00:28:12.945 "w_mbytes_per_sec": 0 00:28:12.945 }, 00:28:12.945 "claimed": false, 00:28:12.945 "zoned": false, 00:28:12.945 "supported_io_types": { 00:28:12.945 "read": true, 00:28:12.945 "write": true, 00:28:12.945 "unmap": true, 00:28:12.945 "flush": false, 00:28:12.945 "reset": true, 00:28:12.945 "nvme_admin": false, 00:28:12.945 "nvme_io": false, 00:28:12.945 "nvme_io_md": false, 00:28:12.945 "write_zeroes": true, 00:28:12.945 "zcopy": false, 00:28:12.945 "get_zone_info": false, 00:28:12.945 "zone_management": false, 00:28:12.945 "zone_append": false, 00:28:12.945 "compare": false, 00:28:12.945 "compare_and_write": false, 00:28:12.945 "abort": false, 00:28:12.945 "seek_hole": true, 00:28:12.945 "seek_data": true, 00:28:12.945 "copy": false, 00:28:12.945 "nvme_iov_md": false 00:28:12.945 }, 00:28:12.945 "driver_specific": { 00:28:12.945 "lvol": { 00:28:12.945 "lvol_store_uuid": "10766fc2-0077-4695-a36d-0d1cf9b9398b", 00:28:12.945 "base_bdev": "Nvme0n1", 00:28:12.945 "thin_provision": true, 00:28:12.945 "num_allocated_clusters": 0, 00:28:12.945 "snapshot": false, 00:28:12.945 "clone": false, 00:28:12.945 "esnap_clone": false 00:28:12.945 } 00:28:12.945 } 00:28:12.945 } 00:28:12.945 ] 00:28:12.945 10:43:16 compress_compdev -- 
common/autotest_common.sh@907 -- # return 0 00:28:12.945 10:43:16 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:28:12.945 10:43:16 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:28:13.203 [2024-07-25 10:43:16.725303] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:13.203 COMP_lvs0/lv0 00:28:13.203 10:43:16 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:13.203 10:43:16 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:28:13.203 10:43:16 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:13.203 10:43:16 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:13.203 10:43:16 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:13.203 10:43:16 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:13.203 10:43:16 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:13.460 10:43:16 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:28:13.716 [ 00:28:13.716 { 00:28:13.716 "name": "COMP_lvs0/lv0", 00:28:13.716 "aliases": [ 00:28:13.716 "38f096de-e88b-5d11-8031-521062290522" 00:28:13.716 ], 00:28:13.717 "product_name": "compress", 00:28:13.717 "block_size": 512, 00:28:13.717 "num_blocks": 200704, 00:28:13.717 "uuid": "38f096de-e88b-5d11-8031-521062290522", 00:28:13.717 "assigned_rate_limits": { 00:28:13.717 "rw_ios_per_sec": 0, 00:28:13.717 "rw_mbytes_per_sec": 0, 00:28:13.717 "r_mbytes_per_sec": 0, 00:28:13.717 "w_mbytes_per_sec": 0 00:28:13.717 }, 00:28:13.717 "claimed": false, 00:28:13.717 "zoned": false, 
00:28:13.717 "supported_io_types": { 00:28:13.717 "read": true, 00:28:13.717 "write": true, 00:28:13.717 "unmap": false, 00:28:13.717 "flush": false, 00:28:13.717 "reset": false, 00:28:13.717 "nvme_admin": false, 00:28:13.717 "nvme_io": false, 00:28:13.717 "nvme_io_md": false, 00:28:13.717 "write_zeroes": true, 00:28:13.717 "zcopy": false, 00:28:13.717 "get_zone_info": false, 00:28:13.717 "zone_management": false, 00:28:13.717 "zone_append": false, 00:28:13.717 "compare": false, 00:28:13.717 "compare_and_write": false, 00:28:13.717 "abort": false, 00:28:13.717 "seek_hole": false, 00:28:13.717 "seek_data": false, 00:28:13.717 "copy": false, 00:28:13.717 "nvme_iov_md": false 00:28:13.717 }, 00:28:13.717 "driver_specific": { 00:28:13.717 "compress": { 00:28:13.717 "name": "COMP_lvs0/lv0", 00:28:13.717 "base_bdev_name": "8c9dfca4-7ca2-4e21-8a54-8562497bbd0a", 00:28:13.717 "pm_path": "/tmp/pmem/9980a322-ce35-42ca-aef3-a1ba0d60b231" 00:28:13.717 } 00:28:13.717 } 00:28:13.717 } 00:28:13.717 ] 00:28:13.717 10:43:17 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:13.717 10:43:17 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:13.717 [2024-07-25 10:43:17.311437] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f396c1b15c0 PMD being used: compress_qat 00:28:13.717 [2024-07-25 10:43:17.313310] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1610620 PMD being used: compress_qat 00:28:13.717 Running I/O for 3 seconds... 
00:28:16.990 00:28:16.990 Latency(us) 00:28:16.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.990 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:28:16.990 Verification LBA range: start 0x0 length 0x3100 00:28:16.990 COMP_lvs0/lv0 : 3.01 3360.23 13.13 0.00 0.00 9481.12 144.12 14757.74 00:28:16.990 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:28:16.990 Verification LBA range: start 0x3100 length 0x3100 00:28:16.990 COMP_lvs0/lv0 : 3.01 3476.77 13.58 0.00 0.00 9164.45 138.05 16311.18 00:28:16.990 =================================================================================================================== 00:28:16.990 Total : 6837.00 26.71 0.00 0.00 9320.07 138.05 16311.18 00:28:16.990 0 00:28:16.990 10:43:20 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:28:16.990 10:43:20 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:28:16.990 10:43:20 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:28:17.247 10:43:20 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:17.247 10:43:20 compress_compdev -- compress/compress.sh@78 -- # killprocess 2486883 00:28:17.247 10:43:20 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 2486883 ']' 00:28:17.247 10:43:20 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 2486883 00:28:17.247 10:43:20 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:28:17.247 10:43:20 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:17.247 10:43:20 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2486883 00:28:17.505 10:43:20 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 
00:28:17.505 10:43:20 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:17.505 10:43:20 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2486883' 00:28:17.505 killing process with pid 2486883 00:28:17.505 10:43:20 compress_compdev -- common/autotest_common.sh@969 -- # kill 2486883 00:28:17.505 Received shutdown signal, test time was about 3.000000 seconds 00:28:17.505 00:28:17.505 Latency(us) 00:28:17.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.505 =================================================================================================================== 00:28:17.505 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:17.505 10:43:20 compress_compdev -- common/autotest_common.sh@974 -- # wait 2486883 00:28:18.878 10:43:22 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:28:18.878 10:43:22 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:28:18.878 10:43:22 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2488368 00:28:18.878 10:43:22 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:28:18.878 10:43:22 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:18.878 10:43:22 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2488368 00:28:18.878 10:43:22 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 2488368 ']' 00:28:18.878 10:43:22 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.878 10:43:22 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:18.878 10:43:22 compress_compdev -- common/autotest_common.sh@838 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.878 10:43:22 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:18.878 10:43:22 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:28:18.878 [2024-07-25 10:43:22.581325] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:28:18.878 [2024-07-25 10:43:22.581414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488368 ] 00:28:19.135 [2024-07-25 10:43:22.657616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:19.135 [2024-07-25 10:43:22.768124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.135 [2024-07-25 10:43:22.768126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.701 [2024-07-25 10:43:23.395644] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:28:19.958 10:43:23 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:19.959 10:43:23 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:28:19.959 10:43:23 compress_compdev -- compress/compress.sh@74 -- # create_vols 512 00:28:19.959 10:43:23 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:19.959 10:43:23 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:23.235 [2024-07-25 10:43:26.640153] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1e9b120 PMD being used: compress_qat 00:28:23.235 10:43:26 compress_compdev -- 
compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:23.235 10:43:26 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:28:23.235 10:43:26 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:23.235 10:43:26 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:23.235 10:43:26 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:23.235 10:43:26 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:23.235 10:43:26 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:23.493 10:43:26 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:23.493 [ 00:28:23.493 { 00:28:23.493 "name": "Nvme0n1", 00:28:23.493 "aliases": [ 00:28:23.493 "44444ed0-56f9-4eb6-9eaf-cef725a92068" 00:28:23.493 ], 00:28:23.493 "product_name": "NVMe disk", 00:28:23.493 "block_size": 512, 00:28:23.493 "num_blocks": 1953525168, 00:28:23.493 "uuid": "44444ed0-56f9-4eb6-9eaf-cef725a92068", 00:28:23.493 "assigned_rate_limits": { 00:28:23.493 "rw_ios_per_sec": 0, 00:28:23.493 "rw_mbytes_per_sec": 0, 00:28:23.493 "r_mbytes_per_sec": 0, 00:28:23.493 "w_mbytes_per_sec": 0 00:28:23.493 }, 00:28:23.493 "claimed": false, 00:28:23.493 "zoned": false, 00:28:23.493 "supported_io_types": { 00:28:23.493 "read": true, 00:28:23.493 "write": true, 00:28:23.493 "unmap": true, 00:28:23.493 "flush": true, 00:28:23.493 "reset": true, 00:28:23.493 "nvme_admin": true, 00:28:23.493 "nvme_io": true, 00:28:23.493 "nvme_io_md": false, 00:28:23.493 "write_zeroes": true, 00:28:23.493 "zcopy": false, 00:28:23.493 "get_zone_info": false, 00:28:23.493 "zone_management": false, 00:28:23.493 "zone_append": false, 00:28:23.493 "compare": false, 00:28:23.493 "compare_and_write": false, 00:28:23.493 "abort": true, 00:28:23.493 
"seek_hole": false, 00:28:23.493 "seek_data": false, 00:28:23.493 "copy": false, 00:28:23.493 "nvme_iov_md": false 00:28:23.493 }, 00:28:23.493 "driver_specific": { 00:28:23.493 "nvme": [ 00:28:23.493 { 00:28:23.493 "pci_address": "0000:0b:00.0", 00:28:23.493 "trid": { 00:28:23.493 "trtype": "PCIe", 00:28:23.493 "traddr": "0000:0b:00.0" 00:28:23.493 }, 00:28:23.493 "ctrlr_data": { 00:28:23.493 "cntlid": 0, 00:28:23.493 "vendor_id": "0x8086", 00:28:23.493 "model_number": "INTEL SSDPE2KX010T8", 00:28:23.493 "serial_number": "BTLJ72430F4Q1P0FGN", 00:28:23.493 "firmware_revision": "VDV10184", 00:28:23.493 "oacs": { 00:28:23.493 "security": 1, 00:28:23.493 "format": 1, 00:28:23.493 "firmware": 1, 00:28:23.493 "ns_manage": 1 00:28:23.493 }, 00:28:23.493 "multi_ctrlr": false, 00:28:23.493 "ana_reporting": false 00:28:23.493 }, 00:28:23.493 "vs": { 00:28:23.493 "nvme_version": "1.2" 00:28:23.493 }, 00:28:23.493 "ns_data": { 00:28:23.493 "id": 1, 00:28:23.493 "can_share": false 00:28:23.493 }, 00:28:23.493 "security": { 00:28:23.493 "opal": true 00:28:23.493 } 00:28:23.493 } 00:28:23.493 ], 00:28:23.493 "mp_policy": "active_passive" 00:28:23.493 } 00:28:23.493 } 00:28:23.493 ] 00:28:23.493 10:43:27 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:23.493 10:43:27 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:23.750 [2024-07-25 10:43:27.417603] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1cd1ed0 PMD being used: compress_qat 00:28:24.682 5ed59733-549e-416c-b3f0-80c38d677977 00:28:24.682 10:43:28 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:24.940 d1657abc-d6b6-4423-834d-a32b67c3fe8a 00:28:24.940 10:43:28 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:24.940 10:43:28 
compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:28:24.940 10:43:28 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:24.940 10:43:28 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:24.940 10:43:28 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:24.940 10:43:28 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:24.940 10:43:28 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:25.505 10:43:28 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:25.505 [ 00:28:25.505 { 00:28:25.505 "name": "d1657abc-d6b6-4423-834d-a32b67c3fe8a", 00:28:25.505 "aliases": [ 00:28:25.505 "lvs0/lv0" 00:28:25.505 ], 00:28:25.505 "product_name": "Logical Volume", 00:28:25.505 "block_size": 512, 00:28:25.505 "num_blocks": 204800, 00:28:25.505 "uuid": "d1657abc-d6b6-4423-834d-a32b67c3fe8a", 00:28:25.505 "assigned_rate_limits": { 00:28:25.505 "rw_ios_per_sec": 0, 00:28:25.505 "rw_mbytes_per_sec": 0, 00:28:25.505 "r_mbytes_per_sec": 0, 00:28:25.505 "w_mbytes_per_sec": 0 00:28:25.505 }, 00:28:25.505 "claimed": false, 00:28:25.505 "zoned": false, 00:28:25.505 "supported_io_types": { 00:28:25.505 "read": true, 00:28:25.505 "write": true, 00:28:25.505 "unmap": true, 00:28:25.505 "flush": false, 00:28:25.505 "reset": true, 00:28:25.505 "nvme_admin": false, 00:28:25.505 "nvme_io": false, 00:28:25.505 "nvme_io_md": false, 00:28:25.505 "write_zeroes": true, 00:28:25.505 "zcopy": false, 00:28:25.505 "get_zone_info": false, 00:28:25.505 "zone_management": false, 00:28:25.505 "zone_append": false, 00:28:25.505 "compare": false, 00:28:25.505 "compare_and_write": false, 00:28:25.505 "abort": false, 00:28:25.505 "seek_hole": true, 00:28:25.505 "seek_data": true, 
00:28:25.505 "copy": false, 00:28:25.505 "nvme_iov_md": false 00:28:25.505 }, 00:28:25.505 "driver_specific": { 00:28:25.505 "lvol": { 00:28:25.505 "lvol_store_uuid": "5ed59733-549e-416c-b3f0-80c38d677977", 00:28:25.505 "base_bdev": "Nvme0n1", 00:28:25.505 "thin_provision": true, 00:28:25.505 "num_allocated_clusters": 0, 00:28:25.505 "snapshot": false, 00:28:25.505 "clone": false, 00:28:25.505 "esnap_clone": false 00:28:25.505 } 00:28:25.505 } 00:28:25.505 } 00:28:25.505 ] 00:28:25.762 10:43:29 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:25.762 10:43:29 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:28:25.762 10:43:29 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:28:25.762 [2024-07-25 10:43:29.455340] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:25.762 COMP_lvs0/lv0 00:28:26.019 10:43:29 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:26.019 10:43:29 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:28:26.019 10:43:29 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:26.019 10:43:29 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:26.019 10:43:29 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:26.019 10:43:29 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:26.019 10:43:29 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:26.019 10:43:29 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:28:26.276 [ 00:28:26.276 { 00:28:26.276 "name": "COMP_lvs0/lv0", 
00:28:26.276 "aliases": [ 00:28:26.276 "4eb91cee-b5f7-5e08-b83a-c67ab90255d9" 00:28:26.276 ], 00:28:26.276 "product_name": "compress", 00:28:26.276 "block_size": 512, 00:28:26.276 "num_blocks": 200704, 00:28:26.276 "uuid": "4eb91cee-b5f7-5e08-b83a-c67ab90255d9", 00:28:26.276 "assigned_rate_limits": { 00:28:26.276 "rw_ios_per_sec": 0, 00:28:26.276 "rw_mbytes_per_sec": 0, 00:28:26.276 "r_mbytes_per_sec": 0, 00:28:26.276 "w_mbytes_per_sec": 0 00:28:26.276 }, 00:28:26.276 "claimed": false, 00:28:26.276 "zoned": false, 00:28:26.276 "supported_io_types": { 00:28:26.276 "read": true, 00:28:26.276 "write": true, 00:28:26.276 "unmap": false, 00:28:26.276 "flush": false, 00:28:26.276 "reset": false, 00:28:26.276 "nvme_admin": false, 00:28:26.276 "nvme_io": false, 00:28:26.276 "nvme_io_md": false, 00:28:26.276 "write_zeroes": true, 00:28:26.276 "zcopy": false, 00:28:26.276 "get_zone_info": false, 00:28:26.276 "zone_management": false, 00:28:26.276 "zone_append": false, 00:28:26.276 "compare": false, 00:28:26.276 "compare_and_write": false, 00:28:26.276 "abort": false, 00:28:26.276 "seek_hole": false, 00:28:26.276 "seek_data": false, 00:28:26.276 "copy": false, 00:28:26.276 "nvme_iov_md": false 00:28:26.276 }, 00:28:26.276 "driver_specific": { 00:28:26.276 "compress": { 00:28:26.276 "name": "COMP_lvs0/lv0", 00:28:26.276 "base_bdev_name": "d1657abc-d6b6-4423-834d-a32b67c3fe8a", 00:28:26.276 "pm_path": "/tmp/pmem/d23005ef-0ce8-4b0d-8084-4cee0f4d4570" 00:28:26.276 } 00:28:26.276 } 00:28:26.276 } 00:28:26.276 ] 00:28:26.276 10:43:29 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:26.276 10:43:29 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:26.534 [2024-07-25 10:43:30.073627] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f59b01b15c0 PMD being used: compress_qat 00:28:26.534 [2024-07-25 10:43:30.075507] accel_dpdk_compressdev.c: 
690:_set_pmd: *NOTICE*: Channel 0x1ec8620 PMD being used: compress_qat 00:28:26.534 Running I/O for 3 seconds... 00:28:29.845 00:28:29.845 Latency(us) 00:28:29.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.845 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:28:29.845 Verification LBA range: start 0x0 length 0x3100 00:28:29.845 COMP_lvs0/lv0 : 3.01 3397.30 13.27 0.00 0.00 9380.31 146.39 13883.92 00:28:29.845 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:28:29.845 Verification LBA range: start 0x3100 length 0x3100 00:28:29.845 COMP_lvs0/lv0 : 3.01 3512.29 13.72 0.00 0.00 9070.98 138.81 14078.10 00:28:29.845 =================================================================================================================== 00:28:29.845 Total : 6909.59 26.99 0.00 0.00 9223.10 138.81 14078.10 00:28:29.845 0 00:28:29.845 10:43:33 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:28:29.845 10:43:33 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:28:29.845 10:43:33 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:28:30.103 10:43:33 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:30.103 10:43:33 compress_compdev -- compress/compress.sh@78 -- # killprocess 2488368 00:28:30.103 10:43:33 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 2488368 ']' 00:28:30.103 10:43:33 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 2488368 00:28:30.103 10:43:33 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:28:30.103 10:43:33 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:30.103 10:43:33 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 2488368 00:28:30.103 10:43:33 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:30.103 10:43:33 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:30.103 10:43:33 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2488368' 00:28:30.103 killing process with pid 2488368 00:28:30.103 10:43:33 compress_compdev -- common/autotest_common.sh@969 -- # kill 2488368 00:28:30.103 Received shutdown signal, test time was about 3.000000 seconds 00:28:30.103 00:28:30.103 Latency(us) 00:28:30.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.103 =================================================================================================================== 00:28:30.103 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.103 10:43:33 compress_compdev -- common/autotest_common.sh@974 -- # wait 2488368 00:28:31.998 10:43:35 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:28:31.998 10:43:35 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:28:31.998 10:43:35 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2489844 00:28:31.998 10:43:35 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:28:31.998 10:43:35 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:31.998 10:43:35 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2489844 00:28:31.998 10:43:35 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 2489844 ']' 00:28:31.998 10:43:35 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.998 10:43:35 compress_compdev -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:31.998 10:43:35 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.998 10:43:35 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:31.998 10:43:35 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:28:31.998 [2024-07-25 10:43:35.300264] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:28:31.998 [2024-07-25 10:43:35.300356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489844 ] 00:28:31.998 [2024-07-25 10:43:35.383446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:31.998 [2024-07-25 10:43:35.504426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.998 [2024-07-25 10:43:35.504431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.563 [2024-07-25 10:43:36.135588] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:28:32.820 10:43:36 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.820 10:43:36 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:28:32.820 10:43:36 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:28:32.821 10:43:36 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:32.821 10:43:36 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:36.098 [2024-07-25 10:43:39.360070] accel_dpdk_compressdev.c: 
690:_set_pmd: *NOTICE*: Channel 0x1681120 PMD being used: compress_qat 00:28:36.098 10:43:39 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:36.098 10:43:39 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:28:36.098 10:43:39 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:36.098 10:43:39 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:36.098 10:43:39 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:36.098 10:43:39 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:36.098 10:43:39 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:36.098 10:43:39 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:36.355 [ 00:28:36.355 { 00:28:36.355 "name": "Nvme0n1", 00:28:36.355 "aliases": [ 00:28:36.355 "50f0ded2-f35b-473c-9463-92739b0161ed" 00:28:36.355 ], 00:28:36.356 "product_name": "NVMe disk", 00:28:36.356 "block_size": 512, 00:28:36.356 "num_blocks": 1953525168, 00:28:36.356 "uuid": "50f0ded2-f35b-473c-9463-92739b0161ed", 00:28:36.356 "assigned_rate_limits": { 00:28:36.356 "rw_ios_per_sec": 0, 00:28:36.356 "rw_mbytes_per_sec": 0, 00:28:36.356 "r_mbytes_per_sec": 0, 00:28:36.356 "w_mbytes_per_sec": 0 00:28:36.356 }, 00:28:36.356 "claimed": false, 00:28:36.356 "zoned": false, 00:28:36.356 "supported_io_types": { 00:28:36.356 "read": true, 00:28:36.356 "write": true, 00:28:36.356 "unmap": true, 00:28:36.356 "flush": true, 00:28:36.356 "reset": true, 00:28:36.356 "nvme_admin": true, 00:28:36.356 "nvme_io": true, 00:28:36.356 "nvme_io_md": false, 00:28:36.356 "write_zeroes": true, 00:28:36.356 "zcopy": false, 00:28:36.356 "get_zone_info": false, 00:28:36.356 "zone_management": false, 00:28:36.356 "zone_append": false, 
00:28:36.356 "compare": false, 00:28:36.356 "compare_and_write": false, 00:28:36.356 "abort": true, 00:28:36.356 "seek_hole": false, 00:28:36.356 "seek_data": false, 00:28:36.356 "copy": false, 00:28:36.356 "nvme_iov_md": false 00:28:36.356 }, 00:28:36.356 "driver_specific": { 00:28:36.356 "nvme": [ 00:28:36.356 { 00:28:36.356 "pci_address": "0000:0b:00.0", 00:28:36.356 "trid": { 00:28:36.356 "trtype": "PCIe", 00:28:36.356 "traddr": "0000:0b:00.0" 00:28:36.356 }, 00:28:36.356 "ctrlr_data": { 00:28:36.356 "cntlid": 0, 00:28:36.356 "vendor_id": "0x8086", 00:28:36.356 "model_number": "INTEL SSDPE2KX010T8", 00:28:36.356 "serial_number": "BTLJ72430F4Q1P0FGN", 00:28:36.356 "firmware_revision": "VDV10184", 00:28:36.356 "oacs": { 00:28:36.356 "security": 1, 00:28:36.356 "format": 1, 00:28:36.356 "firmware": 1, 00:28:36.356 "ns_manage": 1 00:28:36.356 }, 00:28:36.356 "multi_ctrlr": false, 00:28:36.356 "ana_reporting": false 00:28:36.356 }, 00:28:36.356 "vs": { 00:28:36.356 "nvme_version": "1.2" 00:28:36.356 }, 00:28:36.356 "ns_data": { 00:28:36.356 "id": 1, 00:28:36.356 "can_share": false 00:28:36.356 }, 00:28:36.356 "security": { 00:28:36.356 "opal": true 00:28:36.356 } 00:28:36.356 } 00:28:36.356 ], 00:28:36.356 "mp_policy": "active_passive" 00:28:36.356 } 00:28:36.356 } 00:28:36.356 ] 00:28:36.356 10:43:39 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:36.356 10:43:39 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:36.614 [2024-07-25 10:43:40.141639] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x14b7ed0 PMD being used: compress_qat 00:28:37.547 f79f3804-83fb-46fd-95fd-0dc37323e7ab 00:28:37.547 10:43:41 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:37.805 805b238e-c799-40fb-8605-22c80adbd2d2 00:28:37.805 
10:43:41 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:37.805 10:43:41 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:28:37.805 10:43:41 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:37.805 10:43:41 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:37.805 10:43:41 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:37.805 10:43:41 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:37.805 10:43:41 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:38.063 10:43:41 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:38.320 [ 00:28:38.320 { 00:28:38.320 "name": "805b238e-c799-40fb-8605-22c80adbd2d2", 00:28:38.320 "aliases": [ 00:28:38.320 "lvs0/lv0" 00:28:38.320 ], 00:28:38.320 "product_name": "Logical Volume", 00:28:38.320 "block_size": 512, 00:28:38.320 "num_blocks": 204800, 00:28:38.320 "uuid": "805b238e-c799-40fb-8605-22c80adbd2d2", 00:28:38.320 "assigned_rate_limits": { 00:28:38.320 "rw_ios_per_sec": 0, 00:28:38.320 "rw_mbytes_per_sec": 0, 00:28:38.320 "r_mbytes_per_sec": 0, 00:28:38.320 "w_mbytes_per_sec": 0 00:28:38.320 }, 00:28:38.320 "claimed": false, 00:28:38.320 "zoned": false, 00:28:38.320 "supported_io_types": { 00:28:38.320 "read": true, 00:28:38.320 "write": true, 00:28:38.320 "unmap": true, 00:28:38.320 "flush": false, 00:28:38.320 "reset": true, 00:28:38.320 "nvme_admin": false, 00:28:38.320 "nvme_io": false, 00:28:38.320 "nvme_io_md": false, 00:28:38.320 "write_zeroes": true, 00:28:38.320 "zcopy": false, 00:28:38.320 "get_zone_info": false, 00:28:38.320 "zone_management": false, 00:28:38.320 "zone_append": false, 00:28:38.320 "compare": false, 00:28:38.320 "compare_and_write": false, 
00:28:38.320 "abort": false, 00:28:38.320 "seek_hole": true, 00:28:38.320 "seek_data": true, 00:28:38.320 "copy": false, 00:28:38.320 "nvme_iov_md": false 00:28:38.320 }, 00:28:38.320 "driver_specific": { 00:28:38.320 "lvol": { 00:28:38.320 "lvol_store_uuid": "f79f3804-83fb-46fd-95fd-0dc37323e7ab", 00:28:38.320 "base_bdev": "Nvme0n1", 00:28:38.320 "thin_provision": true, 00:28:38.320 "num_allocated_clusters": 0, 00:28:38.320 "snapshot": false, 00:28:38.320 "clone": false, 00:28:38.320 "esnap_clone": false 00:28:38.320 } 00:28:38.320 } 00:28:38.320 } 00:28:38.320 ] 00:28:38.320 10:43:41 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:38.320 10:43:41 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:28:38.320 10:43:41 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:28:38.578 [2024-07-25 10:43:42.152583] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:38.578 COMP_lvs0/lv0 00:28:38.578 10:43:42 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:38.578 10:43:42 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:28:38.578 10:43:42 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:38.578 10:43:42 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:38.578 10:43:42 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:38.578 10:43:42 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:38.578 10:43:42 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:38.835 10:43:42 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs 
-b COMP_lvs0/lv0 -t 2000 00:28:39.093 [ 00:28:39.093 { 00:28:39.093 "name": "COMP_lvs0/lv0", 00:28:39.093 "aliases": [ 00:28:39.093 "b611316d-f387-5c83-9c4c-8abe526dba24" 00:28:39.093 ], 00:28:39.093 "product_name": "compress", 00:28:39.093 "block_size": 4096, 00:28:39.093 "num_blocks": 25088, 00:28:39.093 "uuid": "b611316d-f387-5c83-9c4c-8abe526dba24", 00:28:39.093 "assigned_rate_limits": { 00:28:39.093 "rw_ios_per_sec": 0, 00:28:39.093 "rw_mbytes_per_sec": 0, 00:28:39.093 "r_mbytes_per_sec": 0, 00:28:39.093 "w_mbytes_per_sec": 0 00:28:39.093 }, 00:28:39.093 "claimed": false, 00:28:39.093 "zoned": false, 00:28:39.093 "supported_io_types": { 00:28:39.093 "read": true, 00:28:39.093 "write": true, 00:28:39.093 "unmap": false, 00:28:39.093 "flush": false, 00:28:39.093 "reset": false, 00:28:39.093 "nvme_admin": false, 00:28:39.093 "nvme_io": false, 00:28:39.093 "nvme_io_md": false, 00:28:39.093 "write_zeroes": true, 00:28:39.093 "zcopy": false, 00:28:39.093 "get_zone_info": false, 00:28:39.093 "zone_management": false, 00:28:39.093 "zone_append": false, 00:28:39.093 "compare": false, 00:28:39.093 "compare_and_write": false, 00:28:39.093 "abort": false, 00:28:39.093 "seek_hole": false, 00:28:39.093 "seek_data": false, 00:28:39.093 "copy": false, 00:28:39.093 "nvme_iov_md": false 00:28:39.093 }, 00:28:39.093 "driver_specific": { 00:28:39.093 "compress": { 00:28:39.093 "name": "COMP_lvs0/lv0", 00:28:39.093 "base_bdev_name": "805b238e-c799-40fb-8605-22c80adbd2d2", 00:28:39.093 "pm_path": "/tmp/pmem/3694ae4d-4a1f-4da7-b78c-7da2ed9ada87" 00:28:39.093 } 00:28:39.093 } 00:28:39.093 } 00:28:39.093 ] 00:28:39.093 10:43:42 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:39.093 10:43:42 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:39.093 [2024-07-25 10:43:42.770834] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f340c1b15c0 PMD being used: 
compress_qat 00:28:39.093 [2024-07-25 10:43:42.772734] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x16ae620 PMD being used: compress_qat 00:28:39.093 Running I/O for 3 seconds... 00:28:42.368 00:28:42.369 Latency(us) 00:28:42.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.369 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:28:42.369 Verification LBA range: start 0x0 length 0x3100 00:28:42.369 COMP_lvs0/lv0 : 3.01 3287.26 12.84 0.00 0.00 9675.67 193.42 15243.19 00:28:42.369 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:28:42.369 Verification LBA range: start 0x3100 length 0x3100 00:28:42.369 COMP_lvs0/lv0 : 3.01 3362.79 13.14 0.00 0.00 9468.54 187.35 15631.55 00:28:42.369 =================================================================================================================== 00:28:42.369 Total : 6650.05 25.98 0.00 0.00 9570.98 187.35 15631.55 00:28:42.369 0 00:28:42.369 10:43:45 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:28:42.369 10:43:45 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:28:42.625 10:43:46 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:28:42.882 10:43:46 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:42.882 10:43:46 compress_compdev -- compress/compress.sh@78 -- # killprocess 2489844 00:28:42.882 10:43:46 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 2489844 ']' 00:28:42.882 10:43:46 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 2489844 00:28:42.882 10:43:46 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:28:42.882 10:43:46 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:42.882 
10:43:46 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2489844 00:28:42.882 10:43:46 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:42.882 10:43:46 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:42.882 10:43:46 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2489844' 00:28:42.882 killing process with pid 2489844 00:28:42.882 10:43:46 compress_compdev -- common/autotest_common.sh@969 -- # kill 2489844 00:28:42.882 Received shutdown signal, test time was about 3.000000 seconds 00:28:42.882 00:28:42.882 Latency(us) 00:28:42.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.882 =================================================================================================================== 00:28:42.882 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:42.882 10:43:46 compress_compdev -- common/autotest_common.sh@974 -- # wait 2489844 00:28:44.253 10:43:47 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:28:44.253 10:43:47 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:28:44.253 10:43:47 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=2491321 00:28:44.253 10:43:47 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:28:44.253 10:43:47 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:44.253 10:43:47 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 2491321 00:28:44.253 10:43:47 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 2491321 ']' 00:28:44.253 10:43:47 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.253 10:43:47 
compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:44.253 10:43:47 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.253 10:43:47 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:44.253 10:43:47 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:28:44.511 [2024-07-25 10:43:48.003166] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:28:44.511 [2024-07-25 10:43:48.003240] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491321 ] 00:28:44.511 [2024-07-25 10:43:48.086111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:44.511 [2024-07-25 10:43:48.205026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.511 [2024-07-25 10:43:48.205083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.511 [2024-07-25 10:43:48.205085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.443 [2024-07-25 10:43:48.866535] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:28:45.443 10:43:48 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:45.443 10:43:48 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:28:45.443 10:43:48 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:28:45.443 10:43:48 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:45.443 10:43:48 compress_compdev -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:48.722 [2024-07-25 10:43:52.048119] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xeb0b40 PMD being used: compress_qat 00:28:48.722 10:43:52 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:48.722 10:43:52 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:28:48.722 10:43:52 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:48.722 10:43:52 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:48.722 10:43:52 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:48.722 10:43:52 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:48.722 10:43:52 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:48.722 10:43:52 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:48.979 [ 00:28:48.979 { 00:28:48.979 "name": "Nvme0n1", 00:28:48.979 "aliases": [ 00:28:48.979 "bceb59c1-1e96-4414-a569-b31d8105c578" 00:28:48.979 ], 00:28:48.979 "product_name": "NVMe disk", 00:28:48.979 "block_size": 512, 00:28:48.979 "num_blocks": 1953525168, 00:28:48.979 "uuid": "bceb59c1-1e96-4414-a569-b31d8105c578", 00:28:48.979 "assigned_rate_limits": { 00:28:48.979 "rw_ios_per_sec": 0, 00:28:48.979 "rw_mbytes_per_sec": 0, 00:28:48.979 "r_mbytes_per_sec": 0, 00:28:48.979 "w_mbytes_per_sec": 0 00:28:48.979 }, 00:28:48.979 "claimed": false, 00:28:48.979 "zoned": false, 00:28:48.979 "supported_io_types": { 00:28:48.979 "read": true, 00:28:48.979 "write": true, 00:28:48.979 "unmap": true, 00:28:48.979 "flush": true, 00:28:48.979 "reset": true, 00:28:48.979 "nvme_admin": true, 00:28:48.979 "nvme_io": true, 00:28:48.979 "nvme_io_md": false, 00:28:48.979 
"write_zeroes": true, 00:28:48.979 "zcopy": false, 00:28:48.979 "get_zone_info": false, 00:28:48.979 "zone_management": false, 00:28:48.979 "zone_append": false, 00:28:48.979 "compare": false, 00:28:48.979 "compare_and_write": false, 00:28:48.979 "abort": true, 00:28:48.979 "seek_hole": false, 00:28:48.979 "seek_data": false, 00:28:48.979 "copy": false, 00:28:48.979 "nvme_iov_md": false 00:28:48.979 }, 00:28:48.979 "driver_specific": { 00:28:48.979 "nvme": [ 00:28:48.979 { 00:28:48.979 "pci_address": "0000:0b:00.0", 00:28:48.979 "trid": { 00:28:48.979 "trtype": "PCIe", 00:28:48.979 "traddr": "0000:0b:00.0" 00:28:48.979 }, 00:28:48.979 "ctrlr_data": { 00:28:48.979 "cntlid": 0, 00:28:48.979 "vendor_id": "0x8086", 00:28:48.979 "model_number": "INTEL SSDPE2KX010T8", 00:28:48.979 "serial_number": "BTLJ72430F4Q1P0FGN", 00:28:48.979 "firmware_revision": "VDV10184", 00:28:48.979 "oacs": { 00:28:48.979 "security": 1, 00:28:48.979 "format": 1, 00:28:48.979 "firmware": 1, 00:28:48.979 "ns_manage": 1 00:28:48.979 }, 00:28:48.979 "multi_ctrlr": false, 00:28:48.979 "ana_reporting": false 00:28:48.979 }, 00:28:48.979 "vs": { 00:28:48.979 "nvme_version": "1.2" 00:28:48.979 }, 00:28:48.979 "ns_data": { 00:28:48.979 "id": 1, 00:28:48.979 "can_share": false 00:28:48.979 }, 00:28:48.979 "security": { 00:28:48.979 "opal": true 00:28:48.979 } 00:28:48.979 } 00:28:48.979 ], 00:28:48.979 "mp_policy": "active_passive" 00:28:48.979 } 00:28:48.979 } 00:28:48.979 ] 00:28:48.979 10:43:52 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:48.979 10:43:52 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:49.237 [2024-07-25 10:43:52.829956] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xd158a0 PMD being used: compress_qat 00:28:50.168 ac29a0e9-5dc1-4b7e-8fcf-d9f215b17f9c 00:28:50.168 10:43:53 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:50.425 7c023309-ac31-4680-ac73-6aef46984b1d 00:28:50.425 10:43:53 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:50.425 10:43:53 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:28:50.425 10:43:53 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:50.425 10:43:53 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:50.425 10:43:53 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:50.425 10:43:53 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:50.425 10:43:53 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:50.709 10:43:54 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:50.967 [ 00:28:50.967 { 00:28:50.967 "name": "7c023309-ac31-4680-ac73-6aef46984b1d", 00:28:50.967 "aliases": [ 00:28:50.967 "lvs0/lv0" 00:28:50.967 ], 00:28:50.967 "product_name": "Logical Volume", 00:28:50.967 "block_size": 512, 00:28:50.967 "num_blocks": 204800, 00:28:50.967 "uuid": "7c023309-ac31-4680-ac73-6aef46984b1d", 00:28:50.967 "assigned_rate_limits": { 00:28:50.967 "rw_ios_per_sec": 0, 00:28:50.967 "rw_mbytes_per_sec": 0, 00:28:50.967 "r_mbytes_per_sec": 0, 00:28:50.967 "w_mbytes_per_sec": 0 00:28:50.967 }, 00:28:50.967 "claimed": false, 00:28:50.967 "zoned": false, 00:28:50.967 "supported_io_types": { 00:28:50.967 "read": true, 00:28:50.967 "write": true, 00:28:50.967 "unmap": true, 00:28:50.967 "flush": false, 00:28:50.967 "reset": true, 00:28:50.967 "nvme_admin": false, 00:28:50.967 "nvme_io": false, 00:28:50.967 "nvme_io_md": false, 00:28:50.967 "write_zeroes": true, 00:28:50.967 "zcopy": false, 00:28:50.967 
"get_zone_info": false, 00:28:50.967 "zone_management": false, 00:28:50.967 "zone_append": false, 00:28:50.967 "compare": false, 00:28:50.967 "compare_and_write": false, 00:28:50.967 "abort": false, 00:28:50.967 "seek_hole": true, 00:28:50.967 "seek_data": true, 00:28:50.967 "copy": false, 00:28:50.967 "nvme_iov_md": false 00:28:50.967 }, 00:28:50.967 "driver_specific": { 00:28:50.967 "lvol": { 00:28:50.967 "lvol_store_uuid": "ac29a0e9-5dc1-4b7e-8fcf-d9f215b17f9c", 00:28:50.967 "base_bdev": "Nvme0n1", 00:28:50.967 "thin_provision": true, 00:28:50.967 "num_allocated_clusters": 0, 00:28:50.967 "snapshot": false, 00:28:50.967 "clone": false, 00:28:50.967 "esnap_clone": false 00:28:50.967 } 00:28:50.967 } 00:28:50.967 } 00:28:50.967 ] 00:28:50.967 10:43:54 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:50.967 10:43:54 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:28:50.967 10:43:54 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:28:51.225 [2024-07-25 10:43:54.761222] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:51.225 COMP_lvs0/lv0 00:28:51.225 10:43:54 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:51.225 10:43:54 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:28:51.225 10:43:54 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:51.225 10:43:54 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:28:51.225 10:43:54 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:51.225 10:43:54 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:51.225 10:43:54 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_wait_for_examine 00:28:51.496 10:43:55 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:28:51.800 [ 00:28:51.800 { 00:28:51.800 "name": "COMP_lvs0/lv0", 00:28:51.800 "aliases": [ 00:28:51.800 "3c1ae95c-4466-5d26-abee-37504a9f9c25" 00:28:51.800 ], 00:28:51.800 "product_name": "compress", 00:28:51.800 "block_size": 512, 00:28:51.800 "num_blocks": 200704, 00:28:51.800 "uuid": "3c1ae95c-4466-5d26-abee-37504a9f9c25", 00:28:51.800 "assigned_rate_limits": { 00:28:51.800 "rw_ios_per_sec": 0, 00:28:51.800 "rw_mbytes_per_sec": 0, 00:28:51.800 "r_mbytes_per_sec": 0, 00:28:51.800 "w_mbytes_per_sec": 0 00:28:51.800 }, 00:28:51.800 "claimed": false, 00:28:51.800 "zoned": false, 00:28:51.800 "supported_io_types": { 00:28:51.800 "read": true, 00:28:51.800 "write": true, 00:28:51.800 "unmap": false, 00:28:51.800 "flush": false, 00:28:51.800 "reset": false, 00:28:51.800 "nvme_admin": false, 00:28:51.800 "nvme_io": false, 00:28:51.801 "nvme_io_md": false, 00:28:51.801 "write_zeroes": true, 00:28:51.801 "zcopy": false, 00:28:51.801 "get_zone_info": false, 00:28:51.801 "zone_management": false, 00:28:51.801 "zone_append": false, 00:28:51.801 "compare": false, 00:28:51.801 "compare_and_write": false, 00:28:51.801 "abort": false, 00:28:51.801 "seek_hole": false, 00:28:51.801 "seek_data": false, 00:28:51.801 "copy": false, 00:28:51.801 "nvme_iov_md": false 00:28:51.801 }, 00:28:51.801 "driver_specific": { 00:28:51.801 "compress": { 00:28:51.801 "name": "COMP_lvs0/lv0", 00:28:51.801 "base_bdev_name": "7c023309-ac31-4680-ac73-6aef46984b1d", 00:28:51.801 "pm_path": "/tmp/pmem/9a16d5ec-1c89-44c8-a98f-c9aa663a3794" 00:28:51.801 } 00:28:51.801 } 00:28:51.801 } 00:28:51.801 ] 00:28:51.801 10:43:55 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:28:51.801 10:43:55 compress_compdev -- compress/compress.sh@59 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:51.801 [2024-07-25 10:43:55.418823] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7ff8b01b1350 PMD being used: compress_qat 00:28:51.801 I/O targets: 00:28:51.801 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:28:51.801 00:28:51.801 00:28:51.801 CUnit - A unit testing framework for C - Version 2.1-3 00:28:51.801 http://cunit.sourceforge.net/ 00:28:51.801 00:28:51.801 00:28:51.801 Suite: bdevio tests on: COMP_lvs0/lv0 00:28:51.801 Test: blockdev write read block ...passed 00:28:51.801 Test: blockdev write zeroes read block ...passed 00:28:51.801 Test: blockdev write zeroes read no split ...passed 00:28:51.801 Test: blockdev write zeroes read split ...passed 00:28:51.801 Test: blockdev write zeroes read split partial ...passed 00:28:51.801 Test: blockdev reset ...[2024-07-25 10:43:55.481491] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:28:51.801 passed 00:28:51.801 Test: blockdev write read 8 blocks ...passed 00:28:51.801 Test: blockdev write read size > 128k ...passed 00:28:51.801 Test: blockdev write read invalid size ...passed 00:28:51.801 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:51.801 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:51.801 Test: blockdev write read max offset ...passed 00:28:51.801 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:51.801 Test: blockdev writev readv 8 blocks ...passed 00:28:51.801 Test: blockdev writev readv 30 x 1block ...passed 00:28:51.801 Test: blockdev writev readv block ...passed 00:28:51.801 Test: blockdev writev readv size > 128k ...passed 00:28:51.801 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:51.801 Test: blockdev comparev and writev ...passed 00:28:51.801 Test: blockdev nvme passthru rw ...passed 00:28:51.801 Test: blockdev nvme passthru vendor 
specific ...passed 00:28:51.801 Test: blockdev nvme admin passthru ...passed 00:28:51.801 Test: blockdev copy ...passed 00:28:51.801 00:28:51.801 Run Summary: Type Total Ran Passed Failed Inactive 00:28:51.801 suites 1 1 n/a 0 0 00:28:51.801 tests 23 23 23 0 0 00:28:51.801 asserts 130 130 130 0 n/a 00:28:51.801 00:28:51.801 Elapsed time = 0.187 seconds 00:28:52.058 0 00:28:52.058 10:43:55 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:28:52.058 10:43:55 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:28:52.316 10:43:55 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:28:52.573 10:43:56 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:28:52.573 10:43:56 compress_compdev -- compress/compress.sh@62 -- # killprocess 2491321 00:28:52.573 10:43:56 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 2491321 ']' 00:28:52.573 10:43:56 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 2491321 00:28:52.573 10:43:56 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:28:52.573 10:43:56 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:52.573 10:43:56 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2491321 00:28:52.573 10:43:56 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:52.573 10:43:56 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:52.573 10:43:56 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2491321' 00:28:52.573 killing process with pid 2491321 00:28:52.573 10:43:56 compress_compdev -- common/autotest_common.sh@969 -- # kill 2491321 00:28:52.573 10:43:56 compress_compdev -- common/autotest_common.sh@974 -- # wait 
2491321 00:28:53.944 10:43:57 compress_compdev -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:28:53.944 10:43:57 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:28:53.944 00:28:53.944 real 0m47.740s 00:28:53.944 user 1m49.890s 00:28:53.944 sys 0m4.879s 00:28:53.944 10:43:57 compress_compdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:53.944 10:43:57 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:28:53.944 ************************************ 00:28:53.944 END TEST compress_compdev 00:28:53.944 ************************************ 00:28:54.202 10:43:57 -- spdk/autotest.sh@353 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:28:54.202 10:43:57 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:54.202 10:43:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:54.202 10:43:57 -- common/autotest_common.sh@10 -- # set +x 00:28:54.202 ************************************ 00:28:54.202 START TEST compress_isal 00:28:54.202 ************************************ 00:28:54.202 10:43:57 compress_isal -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:28:54.202 * Looking for test storage... 
00:28:54.202 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:28:54.202 10:43:57 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:28:54.202 10:43:57 compress_isal -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.202 10:43:57 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.202 10:43:57 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.202 10:43:57 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.202 10:43:57 compress_isal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.202 10:43:57 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.202 10:43:57 compress_isal -- paths/export.sh@5 -- # export PATH 00:28:54.202 10:43:57 compress_isal -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@47 -- # : 0 00:28:54.202 10:43:57 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:54.203 10:43:57 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:54.203 10:43:57 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.203 10:43:57 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.203 10:43:57 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.203 10:43:57 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:54.203 10:43:57 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:54.203 10:43:57 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:54.203 10:43:57 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:54.203 10:43:57 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:28:54.203 10:43:57 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:28:54.203 10:43:57 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:28:54.203 10:43:57 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:28:54.203 10:43:57 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2492580 00:28:54.203 10:43:57 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:28:54.203 10:43:57 
compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:54.203 10:43:57 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2492580 00:28:54.203 10:43:57 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 2492580 ']' 00:28:54.203 10:43:57 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.203 10:43:57 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:54.203 10:43:57 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.203 10:43:57 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:54.203 10:43:57 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:28:54.203 [2024-07-25 10:43:57.816296] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:28:54.203 [2024-07-25 10:43:57.816379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492580 ] 00:28:54.203 [2024-07-25 10:43:57.892194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:54.461 [2024-07-25 10:43:58.001590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.461 [2024-07-25 10:43:58.001593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.461 10:43:58 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:54.461 10:43:58 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:28:54.461 10:43:58 compress_isal -- compress/compress.sh@74 -- # create_vols 00:28:54.461 10:43:58 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:54.461 10:43:58 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:57.737 10:44:01 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:57.737 10:44:01 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:28:57.737 10:44:01 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:57.737 10:44:01 compress_isal -- common/autotest_common.sh@901 -- # local i 00:28:57.737 10:44:01 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:57.737 10:44:01 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:57.737 10:44:01 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:57.994 10:44:01 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:58.251 [ 00:28:58.251 { 00:28:58.251 "name": "Nvme0n1", 00:28:58.251 "aliases": [ 00:28:58.251 "f33abbca-0b79-4aed-baaf-046877600ebe" 00:28:58.251 ], 00:28:58.251 "product_name": "NVMe disk", 00:28:58.251 "block_size": 512, 00:28:58.251 "num_blocks": 1953525168, 00:28:58.251 "uuid": "f33abbca-0b79-4aed-baaf-046877600ebe", 00:28:58.251 "assigned_rate_limits": { 00:28:58.251 "rw_ios_per_sec": 0, 00:28:58.251 "rw_mbytes_per_sec": 0, 00:28:58.251 "r_mbytes_per_sec": 0, 00:28:58.251 "w_mbytes_per_sec": 0 00:28:58.251 }, 00:28:58.251 "claimed": false, 00:28:58.251 "zoned": false, 00:28:58.251 "supported_io_types": { 00:28:58.251 "read": true, 00:28:58.251 "write": true, 00:28:58.251 "unmap": true, 00:28:58.251 "flush": true, 00:28:58.251 "reset": true, 00:28:58.251 "nvme_admin": true, 00:28:58.251 "nvme_io": true, 00:28:58.251 "nvme_io_md": false, 00:28:58.251 "write_zeroes": true, 00:28:58.251 "zcopy": false, 00:28:58.251 "get_zone_info": false, 00:28:58.251 "zone_management": false, 00:28:58.251 "zone_append": false, 00:28:58.251 "compare": false, 00:28:58.251 "compare_and_write": false, 00:28:58.251 "abort": true, 00:28:58.251 "seek_hole": false, 00:28:58.251 "seek_data": false, 00:28:58.251 "copy": false, 00:28:58.251 "nvme_iov_md": false 00:28:58.251 }, 00:28:58.251 "driver_specific": { 00:28:58.251 "nvme": [ 00:28:58.251 { 00:28:58.251 "pci_address": "0000:0b:00.0", 00:28:58.251 "trid": { 00:28:58.251 "trtype": "PCIe", 00:28:58.251 "traddr": "0000:0b:00.0" 00:28:58.251 }, 00:28:58.251 "ctrlr_data": { 00:28:58.251 "cntlid": 0, 00:28:58.251 "vendor_id": "0x8086", 00:28:58.251 "model_number": "INTEL SSDPE2KX010T8", 00:28:58.251 "serial_number": "BTLJ72430F4Q1P0FGN", 00:28:58.251 "firmware_revision": "VDV10184", 00:28:58.251 "oacs": { 00:28:58.251 "security": 1, 00:28:58.251 "format": 1, 00:28:58.251 "firmware": 1, 00:28:58.251 "ns_manage": 1 00:28:58.251 }, 00:28:58.251 "multi_ctrlr": false, 00:28:58.251 "ana_reporting": false 
00:28:58.251 }, 00:28:58.251 "vs": { 00:28:58.251 "nvme_version": "1.2" 00:28:58.251 }, 00:28:58.251 "ns_data": { 00:28:58.251 "id": 1, 00:28:58.251 "can_share": false 00:28:58.251 }, 00:28:58.251 "security": { 00:28:58.251 "opal": true 00:28:58.251 } 00:28:58.251 } 00:28:58.251 ], 00:28:58.251 "mp_policy": "active_passive" 00:28:58.251 } 00:28:58.251 } 00:28:58.251 ] 00:28:58.251 10:44:01 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:28:58.251 10:44:01 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:59.621 03fee836-64be-41aa-9248-5902aec56fe3 00:28:59.621 10:44:02 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:59.621 c82ceb83-0fa9-461b-934f-3d5c882352ce 00:28:59.622 10:44:03 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:59.622 10:44:03 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:28:59.622 10:44:03 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:59.622 10:44:03 compress_isal -- common/autotest_common.sh@901 -- # local i 00:28:59.622 10:44:03 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:59.622 10:44:03 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:59.622 10:44:03 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:59.878 10:44:03 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:00.135 [ 00:29:00.135 { 00:29:00.135 "name": "c82ceb83-0fa9-461b-934f-3d5c882352ce", 00:29:00.135 "aliases": [ 00:29:00.135 "lvs0/lv0" 00:29:00.135 ], 00:29:00.135 "product_name": "Logical Volume", 00:29:00.135 
"block_size": 512, 00:29:00.135 "num_blocks": 204800, 00:29:00.135 "uuid": "c82ceb83-0fa9-461b-934f-3d5c882352ce", 00:29:00.135 "assigned_rate_limits": { 00:29:00.135 "rw_ios_per_sec": 0, 00:29:00.135 "rw_mbytes_per_sec": 0, 00:29:00.135 "r_mbytes_per_sec": 0, 00:29:00.135 "w_mbytes_per_sec": 0 00:29:00.135 }, 00:29:00.135 "claimed": false, 00:29:00.135 "zoned": false, 00:29:00.135 "supported_io_types": { 00:29:00.135 "read": true, 00:29:00.135 "write": true, 00:29:00.135 "unmap": true, 00:29:00.135 "flush": false, 00:29:00.135 "reset": true, 00:29:00.135 "nvme_admin": false, 00:29:00.135 "nvme_io": false, 00:29:00.135 "nvme_io_md": false, 00:29:00.135 "write_zeroes": true, 00:29:00.135 "zcopy": false, 00:29:00.135 "get_zone_info": false, 00:29:00.135 "zone_management": false, 00:29:00.135 "zone_append": false, 00:29:00.135 "compare": false, 00:29:00.135 "compare_and_write": false, 00:29:00.135 "abort": false, 00:29:00.135 "seek_hole": true, 00:29:00.135 "seek_data": true, 00:29:00.135 "copy": false, 00:29:00.135 "nvme_iov_md": false 00:29:00.135 }, 00:29:00.135 "driver_specific": { 00:29:00.135 "lvol": { 00:29:00.135 "lvol_store_uuid": "03fee836-64be-41aa-9248-5902aec56fe3", 00:29:00.135 "base_bdev": "Nvme0n1", 00:29:00.135 "thin_provision": true, 00:29:00.135 "num_allocated_clusters": 0, 00:29:00.135 "snapshot": false, 00:29:00.135 "clone": false, 00:29:00.135 "esnap_clone": false 00:29:00.135 } 00:29:00.135 } 00:29:00.135 } 00:29:00.135 ] 00:29:00.135 10:44:03 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:00.135 10:44:03 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:29:00.135 10:44:03 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:29:00.392 [2024-07-25 10:44:04.003499] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:00.392 COMP_lvs0/lv0 
00:29:00.392 10:44:04 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:00.392 10:44:04 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:00.392 10:44:04 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:00.392 10:44:04 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:00.392 10:44:04 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:00.392 10:44:04 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:00.392 10:44:04 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:00.649 10:44:04 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:00.906 [ 00:29:00.906 { 00:29:00.906 "name": "COMP_lvs0/lv0", 00:29:00.906 "aliases": [ 00:29:00.906 "4a904660-3eb5-509a-bd8b-3ca956e7c43d" 00:29:00.906 ], 00:29:00.906 "product_name": "compress", 00:29:00.906 "block_size": 512, 00:29:00.906 "num_blocks": 200704, 00:29:00.906 "uuid": "4a904660-3eb5-509a-bd8b-3ca956e7c43d", 00:29:00.906 "assigned_rate_limits": { 00:29:00.906 "rw_ios_per_sec": 0, 00:29:00.906 "rw_mbytes_per_sec": 0, 00:29:00.906 "r_mbytes_per_sec": 0, 00:29:00.906 "w_mbytes_per_sec": 0 00:29:00.906 }, 00:29:00.906 "claimed": false, 00:29:00.906 "zoned": false, 00:29:00.906 "supported_io_types": { 00:29:00.906 "read": true, 00:29:00.906 "write": true, 00:29:00.906 "unmap": false, 00:29:00.906 "flush": false, 00:29:00.906 "reset": false, 00:29:00.906 "nvme_admin": false, 00:29:00.906 "nvme_io": false, 00:29:00.906 "nvme_io_md": false, 00:29:00.906 "write_zeroes": true, 00:29:00.906 "zcopy": false, 00:29:00.906 "get_zone_info": false, 00:29:00.906 "zone_management": false, 00:29:00.906 "zone_append": false, 00:29:00.906 "compare": false, 00:29:00.906 "compare_and_write": 
false, 00:29:00.906 "abort": false, 00:29:00.906 "seek_hole": false, 00:29:00.906 "seek_data": false, 00:29:00.906 "copy": false, 00:29:00.906 "nvme_iov_md": false 00:29:00.906 }, 00:29:00.906 "driver_specific": { 00:29:00.906 "compress": { 00:29:00.906 "name": "COMP_lvs0/lv0", 00:29:00.906 "base_bdev_name": "c82ceb83-0fa9-461b-934f-3d5c882352ce", 00:29:00.906 "pm_path": "/tmp/pmem/f26bfdd6-5d15-46ec-9cb4-53fac60c6ca2" 00:29:00.906 } 00:29:00.906 } 00:29:00.906 } 00:29:00.906 ] 00:29:00.906 10:44:04 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:00.906 10:44:04 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:01.163 Running I/O for 3 seconds... 00:29:04.439 00:29:04.439 Latency(us) 00:29:04.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.439 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:29:04.439 Verification LBA range: start 0x0 length 0x3100 00:29:04.439 COMP_lvs0/lv0 : 3.01 2723.88 10.64 0.00 0.00 11695.68 77.75 17379.18 00:29:04.439 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:29:04.439 Verification LBA range: start 0x3100 length 0x3100 00:29:04.439 COMP_lvs0/lv0 : 3.01 2723.83 10.64 0.00 0.00 11689.67 84.95 17282.09 00:29:04.439 =================================================================================================================== 00:29:04.439 Total : 5447.71 21.28 0.00 0.00 11692.67 77.75 17379.18 00:29:04.439 0 00:29:04.439 10:44:07 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:29:04.439 10:44:07 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:29:04.439 10:44:07 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 
00:29:04.697 10:44:08 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:04.697 10:44:08 compress_isal -- compress/compress.sh@78 -- # killprocess 2492580 00:29:04.697 10:44:08 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 2492580 ']' 00:29:04.697 10:44:08 compress_isal -- common/autotest_common.sh@954 -- # kill -0 2492580 00:29:04.697 10:44:08 compress_isal -- common/autotest_common.sh@955 -- # uname 00:29:04.697 10:44:08 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:04.697 10:44:08 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2492580 00:29:04.697 10:44:08 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:04.697 10:44:08 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:04.697 10:44:08 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2492580' 00:29:04.697 killing process with pid 2492580 00:29:04.697 10:44:08 compress_isal -- common/autotest_common.sh@969 -- # kill 2492580 00:29:04.697 Received shutdown signal, test time was about 3.000000 seconds 00:29:04.697 00:29:04.697 Latency(us) 00:29:04.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.697 =================================================================================================================== 00:29:04.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.697 10:44:08 compress_isal -- common/autotest_common.sh@974 -- # wait 2492580 00:29:06.593 10:44:09 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:29:06.593 10:44:09 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:29:06.593 10:44:09 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2493935 00:29:06.593 10:44:09 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 
-w verify -t 3 -C -m 0x6 00:29:06.593 10:44:09 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:06.593 10:44:09 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2493935 00:29:06.593 10:44:09 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 2493935 ']' 00:29:06.593 10:44:09 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.593 10:44:09 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:06.593 10:44:09 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.593 10:44:09 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:06.593 10:44:09 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:29:06.593 [2024-07-25 10:44:09.867384] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:29:06.593 [2024-07-25 10:44:09.867477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493935 ] 00:29:06.593 [2024-07-25 10:44:09.949045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:06.593 [2024-07-25 10:44:10.071034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.593 [2024-07-25 10:44:10.071051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.157 10:44:10 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.157 10:44:10 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:29:07.157 10:44:10 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:29:07.157 10:44:10 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:07.157 10:44:10 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:10.427 10:44:13 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:10.427 10:44:13 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:10.427 10:44:13 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:10.427 10:44:13 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:10.427 10:44:13 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:10.427 10:44:13 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:10.428 10:44:13 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:10.685 10:44:14 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:10.685 [ 00:29:10.685 { 00:29:10.685 "name": "Nvme0n1", 00:29:10.685 "aliases": [ 00:29:10.685 "215493b3-697f-4b1a-bc67-b2c0a8bf01cc" 00:29:10.685 ], 00:29:10.685 "product_name": "NVMe disk", 00:29:10.685 "block_size": 512, 00:29:10.685 "num_blocks": 1953525168, 00:29:10.685 "uuid": "215493b3-697f-4b1a-bc67-b2c0a8bf01cc", 00:29:10.685 "assigned_rate_limits": { 00:29:10.685 "rw_ios_per_sec": 0, 00:29:10.685 "rw_mbytes_per_sec": 0, 00:29:10.685 "r_mbytes_per_sec": 0, 00:29:10.685 "w_mbytes_per_sec": 0 00:29:10.685 }, 00:29:10.685 "claimed": false, 00:29:10.685 "zoned": false, 00:29:10.685 "supported_io_types": { 00:29:10.685 "read": true, 00:29:10.685 "write": true, 00:29:10.685 "unmap": true, 00:29:10.685 "flush": true, 00:29:10.686 "reset": true, 00:29:10.686 "nvme_admin": true, 00:29:10.686 "nvme_io": true, 00:29:10.686 "nvme_io_md": false, 00:29:10.686 "write_zeroes": true, 00:29:10.686 "zcopy": false, 00:29:10.686 "get_zone_info": false, 00:29:10.686 "zone_management": false, 00:29:10.686 "zone_append": false, 00:29:10.686 "compare": false, 00:29:10.686 "compare_and_write": false, 00:29:10.686 "abort": true, 00:29:10.686 "seek_hole": false, 00:29:10.686 "seek_data": false, 00:29:10.686 "copy": false, 00:29:10.686 "nvme_iov_md": false 00:29:10.686 }, 00:29:10.686 "driver_specific": { 00:29:10.686 "nvme": [ 00:29:10.686 { 00:29:10.686 "pci_address": "0000:0b:00.0", 00:29:10.686 "trid": { 00:29:10.686 "trtype": "PCIe", 00:29:10.686 "traddr": "0000:0b:00.0" 00:29:10.686 }, 00:29:10.686 "ctrlr_data": { 00:29:10.686 "cntlid": 0, 00:29:10.686 "vendor_id": "0x8086", 00:29:10.686 "model_number": "INTEL SSDPE2KX010T8", 00:29:10.686 "serial_number": "BTLJ72430F4Q1P0FGN", 00:29:10.686 "firmware_revision": "VDV10184", 00:29:10.686 "oacs": { 00:29:10.686 "security": 1, 00:29:10.686 "format": 1, 00:29:10.686 "firmware": 1, 00:29:10.686 "ns_manage": 1 00:29:10.686 }, 00:29:10.686 "multi_ctrlr": false, 00:29:10.686 "ana_reporting": false 
00:29:10.686 }, 00:29:10.686 "vs": { 00:29:10.686 "nvme_version": "1.2" 00:29:10.686 }, 00:29:10.686 "ns_data": { 00:29:10.686 "id": 1, 00:29:10.686 "can_share": false 00:29:10.686 }, 00:29:10.686 "security": { 00:29:10.686 "opal": true 00:29:10.686 } 00:29:10.686 } 00:29:10.686 ], 00:29:10.686 "mp_policy": "active_passive" 00:29:10.686 } 00:29:10.686 } 00:29:10.686 ] 00:29:10.686 10:44:14 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:10.686 10:44:14 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:29:12.227 1d0511fa-e9da-43bc-aa96-8538da09c829 00:29:12.227 10:44:15 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:12.227 09538265-06ee-45cc-a059-2ab5bc2776b0 00:29:12.227 10:44:15 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:12.227 10:44:15 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:29:12.227 10:44:15 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:12.227 10:44:15 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:12.227 10:44:15 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:12.227 10:44:15 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:12.227 10:44:15 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:12.484 10:44:15 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:12.742 [ 00:29:12.742 { 00:29:12.742 "name": "09538265-06ee-45cc-a059-2ab5bc2776b0", 00:29:12.742 "aliases": [ 00:29:12.742 "lvs0/lv0" 00:29:12.742 ], 00:29:12.742 "product_name": "Logical Volume", 00:29:12.742 
"block_size": 512, 00:29:12.742 "num_blocks": 204800, 00:29:12.742 "uuid": "09538265-06ee-45cc-a059-2ab5bc2776b0", 00:29:12.742 "assigned_rate_limits": { 00:29:12.742 "rw_ios_per_sec": 0, 00:29:12.742 "rw_mbytes_per_sec": 0, 00:29:12.742 "r_mbytes_per_sec": 0, 00:29:12.742 "w_mbytes_per_sec": 0 00:29:12.742 }, 00:29:12.742 "claimed": false, 00:29:12.742 "zoned": false, 00:29:12.742 "supported_io_types": { 00:29:12.742 "read": true, 00:29:12.742 "write": true, 00:29:12.742 "unmap": true, 00:29:12.742 "flush": false, 00:29:12.742 "reset": true, 00:29:12.742 "nvme_admin": false, 00:29:12.742 "nvme_io": false, 00:29:12.742 "nvme_io_md": false, 00:29:12.742 "write_zeroes": true, 00:29:12.742 "zcopy": false, 00:29:12.742 "get_zone_info": false, 00:29:12.742 "zone_management": false, 00:29:12.742 "zone_append": false, 00:29:12.742 "compare": false, 00:29:12.742 "compare_and_write": false, 00:29:12.742 "abort": false, 00:29:12.742 "seek_hole": true, 00:29:12.742 "seek_data": true, 00:29:12.742 "copy": false, 00:29:12.742 "nvme_iov_md": false 00:29:12.742 }, 00:29:12.742 "driver_specific": { 00:29:12.742 "lvol": { 00:29:12.742 "lvol_store_uuid": "1d0511fa-e9da-43bc-aa96-8538da09c829", 00:29:12.742 "base_bdev": "Nvme0n1", 00:29:12.742 "thin_provision": true, 00:29:12.742 "num_allocated_clusters": 0, 00:29:12.742 "snapshot": false, 00:29:12.742 "clone": false, 00:29:12.742 "esnap_clone": false 00:29:12.742 } 00:29:12.742 } 00:29:12.742 } 00:29:12.742 ] 00:29:12.742 10:44:16 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:12.742 10:44:16 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:29:12.742 10:44:16 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:29:12.999 [2024-07-25 10:44:16.462791] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:12.999 COMP_lvs0/lv0 
00:29:12.999 10:44:16 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:12.999 10:44:16 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:12.999 10:44:16 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:12.999 10:44:16 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:12.999 10:44:16 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:12.999 10:44:16 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:12.999 10:44:16 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:13.256 10:44:16 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:13.256 [ 00:29:13.256 { 00:29:13.256 "name": "COMP_lvs0/lv0", 00:29:13.256 "aliases": [ 00:29:13.256 "80e1ddac-6f67-53dc-b7c9-4aeb09b5ad46" 00:29:13.256 ], 00:29:13.256 "product_name": "compress", 00:29:13.256 "block_size": 512, 00:29:13.256 "num_blocks": 200704, 00:29:13.256 "uuid": "80e1ddac-6f67-53dc-b7c9-4aeb09b5ad46", 00:29:13.256 "assigned_rate_limits": { 00:29:13.256 "rw_ios_per_sec": 0, 00:29:13.256 "rw_mbytes_per_sec": 0, 00:29:13.256 "r_mbytes_per_sec": 0, 00:29:13.256 "w_mbytes_per_sec": 0 00:29:13.256 }, 00:29:13.256 "claimed": false, 00:29:13.256 "zoned": false, 00:29:13.256 "supported_io_types": { 00:29:13.256 "read": true, 00:29:13.256 "write": true, 00:29:13.256 "unmap": false, 00:29:13.256 "flush": false, 00:29:13.256 "reset": false, 00:29:13.256 "nvme_admin": false, 00:29:13.256 "nvme_io": false, 00:29:13.256 "nvme_io_md": false, 00:29:13.256 "write_zeroes": true, 00:29:13.256 "zcopy": false, 00:29:13.256 "get_zone_info": false, 00:29:13.256 "zone_management": false, 00:29:13.256 "zone_append": false, 00:29:13.256 "compare": false, 00:29:13.256 "compare_and_write": 
false, 00:29:13.256 "abort": false, 00:29:13.256 "seek_hole": false, 00:29:13.256 "seek_data": false, 00:29:13.256 "copy": false, 00:29:13.256 "nvme_iov_md": false 00:29:13.256 }, 00:29:13.256 "driver_specific": { 00:29:13.256 "compress": { 00:29:13.256 "name": "COMP_lvs0/lv0", 00:29:13.256 "base_bdev_name": "09538265-06ee-45cc-a059-2ab5bc2776b0", 00:29:13.256 "pm_path": "/tmp/pmem/04064fee-69bf-4c2a-a44e-66c3531f8d06" 00:29:13.256 } 00:29:13.256 } 00:29:13.256 } 00:29:13.256 ] 00:29:13.256 10:44:16 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:13.256 10:44:16 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:13.513 Running I/O for 3 seconds... 00:29:16.834 00:29:16.834 Latency(us) 00:29:16.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.834 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:29:16.834 Verification LBA range: start 0x0 length 0x3100 00:29:16.834 COMP_lvs0/lv0 : 3.01 2706.43 10.57 0.00 0.00 11767.07 81.54 18058.81 00:29:16.834 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:29:16.834 Verification LBA range: start 0x3100 length 0x3100 00:29:16.834 COMP_lvs0/lv0 : 3.01 2717.74 10.62 0.00 0.00 11717.73 81.54 17670.45 00:29:16.834 =================================================================================================================== 00:29:16.834 Total : 5424.17 21.19 0.00 0.00 11742.35 81.54 18058.81 00:29:16.834 0 00:29:16.834 10:44:20 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:29:16.834 10:44:20 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:29:16.834 10:44:20 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 
00:29:17.092 10:44:20 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:17.092 10:44:20 compress_isal -- compress/compress.sh@78 -- # killprocess 2493935 00:29:17.092 10:44:20 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 2493935 ']' 00:29:17.092 10:44:20 compress_isal -- common/autotest_common.sh@954 -- # kill -0 2493935 00:29:17.092 10:44:20 compress_isal -- common/autotest_common.sh@955 -- # uname 00:29:17.092 10:44:20 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:17.092 10:44:20 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2493935 00:29:17.092 10:44:20 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:17.092 10:44:20 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:17.092 10:44:20 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2493935' 00:29:17.092 killing process with pid 2493935 00:29:17.092 10:44:20 compress_isal -- common/autotest_common.sh@969 -- # kill 2493935 00:29:17.092 Received shutdown signal, test time was about 3.000000 seconds 00:29:17.092 00:29:17.092 Latency(us) 00:29:17.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.092 =================================================================================================================== 00:29:17.092 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:17.092 10:44:20 compress_isal -- common/autotest_common.sh@974 -- # wait 2493935 00:29:18.522 10:44:22 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:29:18.522 10:44:22 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:29:18.522 10:44:22 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2495408 00:29:18.522 10:44:22 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 
4096 -w verify -t 3 -C -m 0x6 00:29:18.522 10:44:22 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:18.522 10:44:22 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2495408 00:29:18.522 10:44:22 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 2495408 ']' 00:29:18.522 10:44:22 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.522 10:44:22 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:18.522 10:44:22 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.522 10:44:22 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:18.522 10:44:22 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:29:18.780 [2024-07-25 10:44:22.257911] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:29:18.780 [2024-07-25 10:44:22.257996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495408 ] 00:29:18.780 [2024-07-25 10:44:22.340373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:18.780 [2024-07-25 10:44:22.457975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.780 [2024-07-25 10:44:22.457978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.713 10:44:23 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:19.713 10:44:23 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:29:19.713 10:44:23 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:29:19.713 10:44:23 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:19.713 10:44:23 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:22.990 10:44:26 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:22.990 10:44:26 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:22.990 10:44:26 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:22.990 10:44:26 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:22.990 10:44:26 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:22.990 10:44:26 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:22.990 10:44:26 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:22.990 10:44:26 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:23.248 [ 00:29:23.248 { 00:29:23.248 "name": "Nvme0n1", 00:29:23.248 "aliases": [ 00:29:23.248 "e4d802b6-a89e-42a1-b5b9-32d9e76eea47" 00:29:23.248 ], 00:29:23.248 "product_name": "NVMe disk", 00:29:23.248 "block_size": 512, 00:29:23.248 "num_blocks": 1953525168, 00:29:23.248 "uuid": "e4d802b6-a89e-42a1-b5b9-32d9e76eea47", 00:29:23.248 "assigned_rate_limits": { 00:29:23.248 "rw_ios_per_sec": 0, 00:29:23.248 "rw_mbytes_per_sec": 0, 00:29:23.248 "r_mbytes_per_sec": 0, 00:29:23.248 "w_mbytes_per_sec": 0 00:29:23.248 }, 00:29:23.248 "claimed": false, 00:29:23.248 "zoned": false, 00:29:23.248 "supported_io_types": { 00:29:23.248 "read": true, 00:29:23.248 "write": true, 00:29:23.248 "unmap": true, 00:29:23.248 "flush": true, 00:29:23.248 "reset": true, 00:29:23.248 "nvme_admin": true, 00:29:23.248 "nvme_io": true, 00:29:23.248 "nvme_io_md": false, 00:29:23.248 "write_zeroes": true, 00:29:23.248 "zcopy": false, 00:29:23.248 "get_zone_info": false, 00:29:23.248 "zone_management": false, 00:29:23.248 "zone_append": false, 00:29:23.248 "compare": false, 00:29:23.248 "compare_and_write": false, 00:29:23.248 "abort": true, 00:29:23.248 "seek_hole": false, 00:29:23.248 "seek_data": false, 00:29:23.248 "copy": false, 00:29:23.248 "nvme_iov_md": false 00:29:23.248 }, 00:29:23.248 "driver_specific": { 00:29:23.248 "nvme": [ 00:29:23.248 { 00:29:23.248 "pci_address": "0000:0b:00.0", 00:29:23.248 "trid": { 00:29:23.248 "trtype": "PCIe", 00:29:23.248 "traddr": "0000:0b:00.0" 00:29:23.248 }, 00:29:23.248 "ctrlr_data": { 00:29:23.248 "cntlid": 0, 00:29:23.248 "vendor_id": "0x8086", 00:29:23.248 "model_number": "INTEL SSDPE2KX010T8", 00:29:23.248 "serial_number": "BTLJ72430F4Q1P0FGN", 00:29:23.248 "firmware_revision": "VDV10184", 00:29:23.248 "oacs": { 00:29:23.248 "security": 1, 00:29:23.248 "format": 1, 00:29:23.248 "firmware": 1, 00:29:23.248 "ns_manage": 1 00:29:23.248 }, 00:29:23.248 "multi_ctrlr": false, 00:29:23.248 "ana_reporting": false 
00:29:23.248 }, 00:29:23.248 "vs": { 00:29:23.248 "nvme_version": "1.2" 00:29:23.248 }, 00:29:23.248 "ns_data": { 00:29:23.248 "id": 1, 00:29:23.248 "can_share": false 00:29:23.248 }, 00:29:23.248 "security": { 00:29:23.248 "opal": true 00:29:23.248 } 00:29:23.248 } 00:29:23.248 ], 00:29:23.248 "mp_policy": "active_passive" 00:29:23.248 } 00:29:23.248 } 00:29:23.248 ] 00:29:23.248 10:44:26 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:23.248 10:44:26 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:29:24.618 92639e71-e6f2-47d4-a1f3-341e7ef7424c 00:29:24.618 10:44:27 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:24.618 353e64f2-8313-4660-a0fc-b6c2f96c8ee5 00:29:24.618 10:44:28 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:24.618 10:44:28 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:29:24.618 10:44:28 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:24.618 10:44:28 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:24.618 10:44:28 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:24.618 10:44:28 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:24.618 10:44:28 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:24.875 10:44:28 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:25.133 [ 00:29:25.133 { 00:29:25.133 "name": "353e64f2-8313-4660-a0fc-b6c2f96c8ee5", 00:29:25.133 "aliases": [ 00:29:25.133 "lvs0/lv0" 00:29:25.133 ], 00:29:25.133 "product_name": "Logical Volume", 00:29:25.133 
"block_size": 512, 00:29:25.133 "num_blocks": 204800, 00:29:25.133 "uuid": "353e64f2-8313-4660-a0fc-b6c2f96c8ee5", 00:29:25.133 "assigned_rate_limits": { 00:29:25.133 "rw_ios_per_sec": 0, 00:29:25.133 "rw_mbytes_per_sec": 0, 00:29:25.133 "r_mbytes_per_sec": 0, 00:29:25.133 "w_mbytes_per_sec": 0 00:29:25.133 }, 00:29:25.133 "claimed": false, 00:29:25.133 "zoned": false, 00:29:25.133 "supported_io_types": { 00:29:25.133 "read": true, 00:29:25.133 "write": true, 00:29:25.133 "unmap": true, 00:29:25.133 "flush": false, 00:29:25.133 "reset": true, 00:29:25.133 "nvme_admin": false, 00:29:25.133 "nvme_io": false, 00:29:25.133 "nvme_io_md": false, 00:29:25.133 "write_zeroes": true, 00:29:25.133 "zcopy": false, 00:29:25.133 "get_zone_info": false, 00:29:25.133 "zone_management": false, 00:29:25.133 "zone_append": false, 00:29:25.133 "compare": false, 00:29:25.133 "compare_and_write": false, 00:29:25.133 "abort": false, 00:29:25.133 "seek_hole": true, 00:29:25.133 "seek_data": true, 00:29:25.133 "copy": false, 00:29:25.133 "nvme_iov_md": false 00:29:25.133 }, 00:29:25.133 "driver_specific": { 00:29:25.133 "lvol": { 00:29:25.133 "lvol_store_uuid": "92639e71-e6f2-47d4-a1f3-341e7ef7424c", 00:29:25.133 "base_bdev": "Nvme0n1", 00:29:25.133 "thin_provision": true, 00:29:25.133 "num_allocated_clusters": 0, 00:29:25.133 "snapshot": false, 00:29:25.133 "clone": false, 00:29:25.133 "esnap_clone": false 00:29:25.133 } 00:29:25.133 } 00:29:25.133 } 00:29:25.133 ] 00:29:25.133 10:44:28 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:25.133 10:44:28 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:29:25.133 10:44:28 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:29:25.391 [2024-07-25 10:44:29.006518] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:25.391 COMP_lvs0/lv0 
00:29:25.391 10:44:29 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:25.391 10:44:29 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:25.391 10:44:29 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:25.391 10:44:29 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:25.391 10:44:29 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:25.391 10:44:29 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:25.391 10:44:29 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:25.648 10:44:29 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:25.905 [ 00:29:25.905 { 00:29:25.905 "name": "COMP_lvs0/lv0", 00:29:25.905 "aliases": [ 00:29:25.905 "de1a4972-a748-5904-874c-2bc1f3c43e7a" 00:29:25.905 ], 00:29:25.905 "product_name": "compress", 00:29:25.905 "block_size": 4096, 00:29:25.905 "num_blocks": 25088, 00:29:25.905 "uuid": "de1a4972-a748-5904-874c-2bc1f3c43e7a", 00:29:25.905 "assigned_rate_limits": { 00:29:25.905 "rw_ios_per_sec": 0, 00:29:25.905 "rw_mbytes_per_sec": 0, 00:29:25.905 "r_mbytes_per_sec": 0, 00:29:25.905 "w_mbytes_per_sec": 0 00:29:25.905 }, 00:29:25.905 "claimed": false, 00:29:25.905 "zoned": false, 00:29:25.905 "supported_io_types": { 00:29:25.905 "read": true, 00:29:25.905 "write": true, 00:29:25.905 "unmap": false, 00:29:25.905 "flush": false, 00:29:25.905 "reset": false, 00:29:25.905 "nvme_admin": false, 00:29:25.905 "nvme_io": false, 00:29:25.905 "nvme_io_md": false, 00:29:25.905 "write_zeroes": true, 00:29:25.905 "zcopy": false, 00:29:25.905 "get_zone_info": false, 00:29:25.905 "zone_management": false, 00:29:25.905 "zone_append": false, 00:29:25.905 "compare": false, 00:29:25.905 "compare_and_write": 
false, 00:29:25.905 "abort": false, 00:29:25.905 "seek_hole": false, 00:29:25.905 "seek_data": false, 00:29:25.905 "copy": false, 00:29:25.905 "nvme_iov_md": false 00:29:25.905 }, 00:29:25.905 "driver_specific": { 00:29:25.905 "compress": { 00:29:25.905 "name": "COMP_lvs0/lv0", 00:29:25.905 "base_bdev_name": "353e64f2-8313-4660-a0fc-b6c2f96c8ee5", 00:29:25.905 "pm_path": "/tmp/pmem/e71ee44f-5e10-45a3-9477-6098d800bf4a" 00:29:25.905 } 00:29:25.905 } 00:29:25.905 } 00:29:25.905 ] 00:29:25.905 10:44:29 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:25.905 10:44:29 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:25.905 Running I/O for 3 seconds... 00:29:29.182 00:29:29.182 Latency(us) 00:29:29.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.182 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:29:29.182 Verification LBA range: start 0x0 length 0x3100 00:29:29.182 COMP_lvs0/lv0 : 3.01 2698.19 10.54 0.00 0.00 11817.73 85.71 18544.26 00:29:29.182 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:29:29.182 Verification LBA range: start 0x3100 length 0x3100 00:29:29.182 COMP_lvs0/lv0 : 3.01 2695.17 10.53 0.00 0.00 11824.08 84.95 16990.81 00:29:29.182 =================================================================================================================== 00:29:29.182 Total : 5393.36 21.07 0.00 0.00 11820.90 84.95 18544.26 00:29:29.182 0 00:29:29.182 10:44:32 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:29:29.182 10:44:32 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:29:29.439 10:44:32 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 
00:29:29.696 10:44:33 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:29.696 10:44:33 compress_isal -- compress/compress.sh@78 -- # killprocess 2495408 00:29:29.696 10:44:33 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 2495408 ']' 00:29:29.696 10:44:33 compress_isal -- common/autotest_common.sh@954 -- # kill -0 2495408 00:29:29.696 10:44:33 compress_isal -- common/autotest_common.sh@955 -- # uname 00:29:29.696 10:44:33 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:29.696 10:44:33 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2495408 00:29:29.696 10:44:33 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:29.696 10:44:33 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:29.696 10:44:33 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2495408' 00:29:29.696 killing process with pid 2495408 00:29:29.696 10:44:33 compress_isal -- common/autotest_common.sh@969 -- # kill 2495408 00:29:29.696 Received shutdown signal, test time was about 3.000000 seconds 00:29:29.696 00:29:29.696 Latency(us) 00:29:29.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.696 =================================================================================================================== 00:29:29.696 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:29.696 10:44:33 compress_isal -- common/autotest_common.sh@974 -- # wait 2495408 00:29:31.067 10:44:34 compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:29:31.067 10:44:34 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:29:31.067 10:44:34 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=2496885 00:29:31.067 10:44:34 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:29:31.067 10:44:34 
compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:31.067 10:44:34 compress_isal -- compress/compress.sh@57 -- # waitforlisten 2496885 00:29:31.067 10:44:34 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 2496885 ']' 00:29:31.067 10:44:34 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.067 10:44:34 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:31.067 10:44:34 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.067 10:44:34 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:31.067 10:44:34 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:29:31.325 [2024-07-25 10:44:34.817648] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:29:31.325 [2024-07-25 10:44:34.817722] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496885 ] 00:29:31.325 [2024-07-25 10:44:34.892277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:31.325 [2024-07-25 10:44:35.002494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.325 [2024-07-25 10:44:35.002551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.325 [2024-07-25 10:44:35.002554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.255 10:44:35 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:32.255 10:44:35 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:29:32.255 10:44:35 compress_isal -- compress/compress.sh@58 -- # create_vols 00:29:32.255 10:44:35 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:32.255 10:44:35 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:35.532 10:44:38 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:35.532 10:44:38 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:35.532 10:44:38 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:35.532 10:44:38 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:35.532 10:44:38 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:35.532 10:44:38 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:35.532 10:44:38 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:35.532 10:44:39 compress_isal -- 
common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:35.788 [ 00:29:35.789 { 00:29:35.789 "name": "Nvme0n1", 00:29:35.789 "aliases": [ 00:29:35.789 "314aceb9-0511-4b89-83d4-db0ec02ed86b" 00:29:35.789 ], 00:29:35.789 "product_name": "NVMe disk", 00:29:35.789 "block_size": 512, 00:29:35.789 "num_blocks": 1953525168, 00:29:35.789 "uuid": "314aceb9-0511-4b89-83d4-db0ec02ed86b", 00:29:35.789 "assigned_rate_limits": { 00:29:35.789 "rw_ios_per_sec": 0, 00:29:35.789 "rw_mbytes_per_sec": 0, 00:29:35.789 "r_mbytes_per_sec": 0, 00:29:35.789 "w_mbytes_per_sec": 0 00:29:35.789 }, 00:29:35.789 "claimed": false, 00:29:35.789 "zoned": false, 00:29:35.789 "supported_io_types": { 00:29:35.789 "read": true, 00:29:35.789 "write": true, 00:29:35.789 "unmap": true, 00:29:35.789 "flush": true, 00:29:35.789 "reset": true, 00:29:35.789 "nvme_admin": true, 00:29:35.789 "nvme_io": true, 00:29:35.789 "nvme_io_md": false, 00:29:35.789 "write_zeroes": true, 00:29:35.789 "zcopy": false, 00:29:35.789 "get_zone_info": false, 00:29:35.789 "zone_management": false, 00:29:35.789 "zone_append": false, 00:29:35.789 "compare": false, 00:29:35.789 "compare_and_write": false, 00:29:35.789 "abort": true, 00:29:35.789 "seek_hole": false, 00:29:35.789 "seek_data": false, 00:29:35.789 "copy": false, 00:29:35.789 "nvme_iov_md": false 00:29:35.789 }, 00:29:35.789 "driver_specific": { 00:29:35.789 "nvme": [ 00:29:35.789 { 00:29:35.789 "pci_address": "0000:0b:00.0", 00:29:35.789 "trid": { 00:29:35.789 "trtype": "PCIe", 00:29:35.789 "traddr": "0000:0b:00.0" 00:29:35.789 }, 00:29:35.789 "ctrlr_data": { 00:29:35.789 "cntlid": 0, 00:29:35.789 "vendor_id": "0x8086", 00:29:35.789 "model_number": "INTEL SSDPE2KX010T8", 00:29:35.789 "serial_number": "BTLJ72430F4Q1P0FGN", 00:29:35.789 "firmware_revision": "VDV10184", 00:29:35.789 "oacs": { 00:29:35.789 "security": 1, 00:29:35.789 "format": 1, 00:29:35.789 "firmware": 1, 00:29:35.789 
"ns_manage": 1 00:29:35.789 }, 00:29:35.789 "multi_ctrlr": false, 00:29:35.789 "ana_reporting": false 00:29:35.789 }, 00:29:35.789 "vs": { 00:29:35.789 "nvme_version": "1.2" 00:29:35.789 }, 00:29:35.789 "ns_data": { 00:29:35.789 "id": 1, 00:29:35.789 "can_share": false 00:29:35.789 }, 00:29:35.789 "security": { 00:29:35.789 "opal": true 00:29:35.789 } 00:29:35.789 } 00:29:35.789 ], 00:29:35.789 "mp_policy": "active_passive" 00:29:35.789 } 00:29:35.789 } 00:29:35.789 ] 00:29:35.789 10:44:39 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:35.789 10:44:39 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:29:37.159 7ea6b7ec-b630-420e-a694-b858fff76124 00:29:37.159 10:44:40 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:37.159 2b28a287-436d-4236-9792-868bc395b334 00:29:37.159 10:44:40 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:37.159 10:44:40 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:29:37.159 10:44:40 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:37.159 10:44:40 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:37.159 10:44:40 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:37.159 10:44:40 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:37.159 10:44:40 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:37.429 10:44:41 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:38.045 [ 00:29:38.045 { 00:29:38.045 "name": "2b28a287-436d-4236-9792-868bc395b334", 00:29:38.045 "aliases": 
[ 00:29:38.045 "lvs0/lv0" 00:29:38.045 ], 00:29:38.045 "product_name": "Logical Volume", 00:29:38.045 "block_size": 512, 00:29:38.045 "num_blocks": 204800, 00:29:38.045 "uuid": "2b28a287-436d-4236-9792-868bc395b334", 00:29:38.045 "assigned_rate_limits": { 00:29:38.045 "rw_ios_per_sec": 0, 00:29:38.045 "rw_mbytes_per_sec": 0, 00:29:38.045 "r_mbytes_per_sec": 0, 00:29:38.045 "w_mbytes_per_sec": 0 00:29:38.045 }, 00:29:38.045 "claimed": false, 00:29:38.045 "zoned": false, 00:29:38.045 "supported_io_types": { 00:29:38.045 "read": true, 00:29:38.045 "write": true, 00:29:38.045 "unmap": true, 00:29:38.045 "flush": false, 00:29:38.045 "reset": true, 00:29:38.045 "nvme_admin": false, 00:29:38.045 "nvme_io": false, 00:29:38.045 "nvme_io_md": false, 00:29:38.045 "write_zeroes": true, 00:29:38.045 "zcopy": false, 00:29:38.045 "get_zone_info": false, 00:29:38.045 "zone_management": false, 00:29:38.045 "zone_append": false, 00:29:38.045 "compare": false, 00:29:38.045 "compare_and_write": false, 00:29:38.045 "abort": false, 00:29:38.045 "seek_hole": true, 00:29:38.045 "seek_data": true, 00:29:38.045 "copy": false, 00:29:38.045 "nvme_iov_md": false 00:29:38.045 }, 00:29:38.045 "driver_specific": { 00:29:38.045 "lvol": { 00:29:38.045 "lvol_store_uuid": "7ea6b7ec-b630-420e-a694-b858fff76124", 00:29:38.045 "base_bdev": "Nvme0n1", 00:29:38.045 "thin_provision": true, 00:29:38.045 "num_allocated_clusters": 0, 00:29:38.045 "snapshot": false, 00:29:38.045 "clone": false, 00:29:38.045 "esnap_clone": false 00:29:38.045 } 00:29:38.045 } 00:29:38.045 } 00:29:38.045 ] 00:29:38.045 10:44:41 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:38.045 10:44:41 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:29:38.045 10:44:41 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:29:38.045 [2024-07-25 10:44:41.667414] vbdev_compress.c: 999:vbdev_compress_claim: 
*NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:38.045 COMP_lvs0/lv0 00:29:38.045 10:44:41 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:38.045 10:44:41 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:38.045 10:44:41 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:38.045 10:44:41 compress_isal -- common/autotest_common.sh@901 -- # local i 00:29:38.045 10:44:41 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:38.045 10:44:41 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:38.045 10:44:41 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:38.302 10:44:41 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:38.559 [ 00:29:38.559 { 00:29:38.559 "name": "COMP_lvs0/lv0", 00:29:38.559 "aliases": [ 00:29:38.559 "711101b2-70eb-55d4-8b2b-fc4f881bdaec" 00:29:38.559 ], 00:29:38.559 "product_name": "compress", 00:29:38.559 "block_size": 512, 00:29:38.559 "num_blocks": 200704, 00:29:38.559 "uuid": "711101b2-70eb-55d4-8b2b-fc4f881bdaec", 00:29:38.559 "assigned_rate_limits": { 00:29:38.559 "rw_ios_per_sec": 0, 00:29:38.559 "rw_mbytes_per_sec": 0, 00:29:38.559 "r_mbytes_per_sec": 0, 00:29:38.559 "w_mbytes_per_sec": 0 00:29:38.559 }, 00:29:38.559 "claimed": false, 00:29:38.559 "zoned": false, 00:29:38.559 "supported_io_types": { 00:29:38.559 "read": true, 00:29:38.559 "write": true, 00:29:38.559 "unmap": false, 00:29:38.559 "flush": false, 00:29:38.559 "reset": false, 00:29:38.559 "nvme_admin": false, 00:29:38.559 "nvme_io": false, 00:29:38.559 "nvme_io_md": false, 00:29:38.559 "write_zeroes": true, 00:29:38.559 "zcopy": false, 00:29:38.559 "get_zone_info": false, 00:29:38.559 "zone_management": false, 
00:29:38.559 "zone_append": false, 00:29:38.559 "compare": false, 00:29:38.559 "compare_and_write": false, 00:29:38.559 "abort": false, 00:29:38.559 "seek_hole": false, 00:29:38.559 "seek_data": false, 00:29:38.559 "copy": false, 00:29:38.559 "nvme_iov_md": false 00:29:38.559 }, 00:29:38.559 "driver_specific": { 00:29:38.559 "compress": { 00:29:38.559 "name": "COMP_lvs0/lv0", 00:29:38.559 "base_bdev_name": "2b28a287-436d-4236-9792-868bc395b334", 00:29:38.559 "pm_path": "/tmp/pmem/cce85186-bf58-4e84-8201-a5a33c5bc933" 00:29:38.559 } 00:29:38.559 } 00:29:38.559 } 00:29:38.559 ] 00:29:38.559 10:44:42 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:29:38.559 10:44:42 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:38.816 I/O targets: 00:29:38.816 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:29:38.817 00:29:38.817 00:29:38.817 CUnit - A unit testing framework for C - Version 2.1-3 00:29:38.817 http://cunit.sourceforge.net/ 00:29:38.817 00:29:38.817 00:29:38.817 Suite: bdevio tests on: COMP_lvs0/lv0 00:29:38.817 Test: blockdev write read block ...passed 00:29:38.817 Test: blockdev write zeroes read block ...passed 00:29:38.817 Test: blockdev write zeroes read no split ...passed 00:29:38.817 Test: blockdev write zeroes read split ...passed 00:29:38.817 Test: blockdev write zeroes read split partial ...passed 00:29:38.817 Test: blockdev reset ...[2024-07-25 10:44:42.398408] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:29:38.817 passed 00:29:38.817 Test: blockdev write read 8 blocks ...passed 00:29:38.817 Test: blockdev write read size > 128k ...passed 00:29:38.817 Test: blockdev write read invalid size ...passed 00:29:38.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:38.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:38.817 Test: blockdev write read max 
offset ...passed 00:29:38.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:38.817 Test: blockdev writev readv 8 blocks ...passed 00:29:38.817 Test: blockdev writev readv 30 x 1block ...passed 00:29:38.817 Test: blockdev writev readv block ...passed 00:29:38.817 Test: blockdev writev readv size > 128k ...passed 00:29:38.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:38.817 Test: blockdev comparev and writev ...passed 00:29:38.817 Test: blockdev nvme passthru rw ...passed 00:29:38.817 Test: blockdev nvme passthru vendor specific ...passed 00:29:38.817 Test: blockdev nvme admin passthru ...passed 00:29:38.817 Test: blockdev copy ...passed 00:29:38.817 00:29:38.817 Run Summary: Type Total Ran Passed Failed Inactive 00:29:38.817 suites 1 1 n/a 0 0 00:29:38.817 tests 23 23 23 0 0 00:29:38.817 asserts 130 130 130 0 n/a 00:29:38.817 00:29:38.817 Elapsed time = 0.201 seconds 00:29:38.817 0 00:29:38.817 10:44:42 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:29:38.817 10:44:42 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:29:39.074 10:44:42 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:29:39.331 10:44:42 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:29:39.331 10:44:42 compress_isal -- compress/compress.sh@62 -- # killprocess 2496885 00:29:39.331 10:44:42 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 2496885 ']' 00:29:39.331 10:44:42 compress_isal -- common/autotest_common.sh@954 -- # kill -0 2496885 00:29:39.331 10:44:42 compress_isal -- common/autotest_common.sh@955 -- # uname 00:29:39.331 10:44:42 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:39.331 10:44:42 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 2496885 00:29:39.331 10:44:42 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:39.331 10:44:42 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:39.331 10:44:42 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2496885' 00:29:39.331 killing process with pid 2496885 00:29:39.331 10:44:42 compress_isal -- common/autotest_common.sh@969 -- # kill 2496885 00:29:39.331 10:44:42 compress_isal -- common/autotest_common.sh@974 -- # wait 2496885 00:29:41.225 10:44:44 compress_isal -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:29:41.225 10:44:44 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:29:41.225 00:29:41.225 real 0m46.829s 00:29:41.225 user 1m49.156s 00:29:41.225 sys 0m3.484s 00:29:41.225 10:44:44 compress_isal -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:41.225 10:44:44 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:29:41.225 ************************************ 00:29:41.225 END TEST compress_isal 00:29:41.225 ************************************ 00:29:41.225 10:44:44 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:41.225 10:44:44 -- spdk/autotest.sh@360 -- # '[' 1 -eq 1 ']' 00:29:41.225 10:44:44 -- spdk/autotest.sh@361 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:29:41.225 10:44:44 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:41.225 10:44:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:41.225 10:44:44 -- common/autotest_common.sh@10 -- # set +x 00:29:41.225 ************************************ 00:29:41.225 START TEST blockdev_crypto_aesni 00:29:41.225 ************************************ 00:29:41.225 10:44:44 blockdev_crypto_aesni -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:29:41.225 * Looking for test storage... 
00:29:41.225 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:29:41.225 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:29:41.225 10:44:44 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:29:41.225 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # uname -s 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@681 -- # test_type=crypto_aesni 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # crypto_device= 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # dek= 00:29:41.226 10:44:44 
blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # env_ctx= 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == bdev ]] 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == crypto_* ]] 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2498139 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:41.226 10:44:44 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 2498139 00:29:41.226 10:44:44 blockdev_crypto_aesni -- common/autotest_common.sh@831 -- # '[' -z 2498139 ']' 00:29:41.226 10:44:44 blockdev_crypto_aesni -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.226 10:44:44 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:41.226 10:44:44 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:41.226 10:44:44 blockdev_crypto_aesni -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:41.226 10:44:44 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:41.226 [2024-07-25 10:44:44.687289] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:29:41.226 [2024-07-25 10:44:44.687367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498139 ] 00:29:41.226 [2024-07-25 10:44:44.761687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.226 [2024-07-25 10:44:44.878387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.159 10:44:45 blockdev_crypto_aesni -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:42.159 10:44:45 blockdev_crypto_aesni -- common/autotest_common.sh@864 -- # return 0 00:29:42.159 10:44:45 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:29:42.159 10:44:45 blockdev_crypto_aesni -- bdev/blockdev.sh@704 -- # setup_crypto_aesni_conf 00:29:42.159 10:44:45 blockdev_crypto_aesni -- bdev/blockdev.sh@145 -- # rpc_cmd 00:29:42.159 10:44:45 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.159 10:44:45 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:42.159 [2024-07-25 10:44:45.672893] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:29:42.159 [2024-07-25 10:44:45.680919] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:29:42.159 [2024-07-25 10:44:45.688934] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:29:42.159 [2024-07-25 10:44:45.771307] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: 
*NOTICE*: Found crypto devices: 97 00:29:44.687 true 00:29:44.687 true 00:29:44.687 true 00:29:44.687 true 00:29:44.687 Malloc0 00:29:44.687 Malloc1 00:29:44.687 Malloc2 00:29:44.687 Malloc3 00:29:44.687 [2024-07-25 10:44:48.267975] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:29:44.687 crypto_ram 00:29:44.687 [2024-07-25 10:44:48.275972] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:29:44.687 crypto_ram2 00:29:44.687 [2024-07-25 10:44:48.283994] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:29:44.687 crypto_ram3 00:29:44.687 [2024-07-25 10:44:48.292017] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:29:44.687 crypto_ram4 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.687 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.687 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # cat 00:29:44.687 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.687 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.687 10:44:48 
blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.687 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.687 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:29:44.687 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:29:44.687 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.687 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:44.944 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.944 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:29:44.945 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "d73a95fd-99d5-56ec-9187-3a34ca7e5473"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d73a95fd-99d5-56ec-9187-3a34ca7e5473",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "089bf40b-5678-5a31-8225-9c70c10cff64"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "089bf40b-5678-5a31-8225-9c70c10cff64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "2450a284-2bc2-57d3-ade5-f5b8fe4e906d"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "2450a284-2bc2-57d3-ade5-f5b8fe4e906d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "40190012-ace0-5c56-8469-7bd77eed14d2"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "40190012-ace0-5c56-8469-7bd77eed14d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:29:44.945 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r .name 00:29:44.945 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # 
bdev_list=("${bdevs_name[@]}") 00:29:44.945 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:29:44.945 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:29:44.945 10:44:48 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # killprocess 2498139 00:29:44.945 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@950 -- # '[' -z 2498139 ']' 00:29:44.945 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # kill -0 2498139 00:29:44.945 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # uname 00:29:44.945 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:44.945 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2498139 00:29:44.945 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:44.945 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:44.945 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2498139' 00:29:44.945 killing process with pid 2498139 00:29:44.945 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@969 -- # kill 2498139 00:29:44.945 10:44:48 blockdev_crypto_aesni -- common/autotest_common.sh@974 -- # wait 2498139 00:29:45.509 10:44:49 blockdev_crypto_aesni -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:45.509 10:44:49 blockdev_crypto_aesni -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:29:45.509 10:44:49 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:45.509 10:44:49 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:45.509 10:44:49 
blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:45.509 ************************************ 00:29:45.509 START TEST bdev_hello_world 00:29:45.509 ************************************ 00:29:45.509 10:44:49 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:29:45.766 [2024-07-25 10:44:49.238154] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:29:45.766 [2024-07-25 10:44:49.238235] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498684 ] 00:29:45.766 [2024-07-25 10:44:49.317227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.766 [2024-07-25 10:44:49.435324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.766 [2024-07-25 10:44:49.456599] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:29:45.766 [2024-07-25 10:44:49.464625] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:29:45.766 [2024-07-25 10:44:49.472644] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:29:46.023 [2024-07-25 10:44:49.585471] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:29:48.549 [2024-07-25 10:44:51.862215] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:29:48.549 [2024-07-25 10:44:51.862300] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:29:48.549 [2024-07-25 10:44:51.862317] vbdev_crypto.c: 
617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:48.549 [2024-07-25 10:44:51.870234] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:29:48.549 [2024-07-25 10:44:51.870260] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:29:48.549 [2024-07-25 10:44:51.870272] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:48.549 [2024-07-25 10:44:51.878251] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:29:48.549 [2024-07-25 10:44:51.878273] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:29:48.549 [2024-07-25 10:44:51.878301] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:48.549 [2024-07-25 10:44:51.886272] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:29:48.549 [2024-07-25 10:44:51.886295] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:29:48.549 [2024-07-25 10:44:51.886321] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:48.549 [2024-07-25 10:44:51.966654] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:48.549 [2024-07-25 10:44:51.966700] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:29:48.549 [2024-07-25 10:44:51.966733] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:48.549 [2024-07-25 10:44:51.967897] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:48.549 [2024-07-25 10:44:51.967981] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:48.549 [2024-07-25 10:44:51.968009] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:48.549 [2024-07-25 10:44:51.968064] hello_bdev.c: 
65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:48.549 00:29:48.549 [2024-07-25 10:44:51.968089] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:48.806 00:29:48.806 real 0m3.278s 00:29:48.806 user 0m2.846s 00:29:48.806 sys 0m0.400s 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:48.806 ************************************ 00:29:48.806 END TEST bdev_hello_world 00:29:48.806 ************************************ 00:29:48.806 10:44:52 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:29:48.806 10:44:52 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:48.806 10:44:52 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:48.806 10:44:52 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:48.806 ************************************ 00:29:48.806 START TEST bdev_bounds 00:29:48.806 ************************************ 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=2499099 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 2499099' 00:29:48.806 Process bdevio pid: 2499099 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_bounds -- 
bdev/blockdev.sh@292 -- # waitforlisten 2499099 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 2499099 ']' 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:48.806 10:44:52 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:49.063 [2024-07-25 10:44:52.560426] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:29:49.063 [2024-07-25 10:44:52.560511] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499099 ] 00:29:49.063 [2024-07-25 10:44:52.641404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:49.063 [2024-07-25 10:44:52.755616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.063 [2024-07-25 10:44:52.755681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.063 [2024-07-25 10:44:52.755684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.320 [2024-07-25 10:44:52.777020] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:29:49.320 [2024-07-25 10:44:52.785046] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:29:49.320 [2024-07-25 10:44:52.793065] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:29:49.320 [2024-07-25 10:44:52.906733] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:29:51.844 [2024-07-25 10:44:55.178034] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:29:51.844 [2024-07-25 10:44:55.178139] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:29:51.844 [2024-07-25 10:44:55.178175] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:51.844 [2024-07-25 10:44:55.186051] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:29:51.844 [2024-07-25 10:44:55.186090] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:29:51.844 [2024-07-25 
10:44:55.186112] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:51.844 [2024-07-25 10:44:55.194073] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:29:51.844 [2024-07-25 10:44:55.194130] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:29:51.844 [2024-07-25 10:44:55.194143] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:51.844 [2024-07-25 10:44:55.202117] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:29:51.844 [2024-07-25 10:44:55.202141] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:29:51.844 [2024-07-25 10:44:55.202169] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:51.844 10:44:55 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:51.844 10:44:55 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:29:51.844 10:44:55 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:51.844 I/O targets: 00:29:51.844 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:29:51.844 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:29:51.844 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:29:51.844 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:29:51.844 00:29:51.844 00:29:51.844 CUnit - A unit testing framework for C - Version 2.1-3 00:29:51.844 http://cunit.sourceforge.net/ 00:29:51.844 00:29:51.844 00:29:51.844 Suite: bdevio tests on: crypto_ram4 00:29:51.844 Test: blockdev write read block ...passed 00:29:51.844 Test: blockdev write zeroes read block ...passed 00:29:51.844 Test: blockdev write zeroes read no split ...passed 00:29:51.844 Test: blockdev 
write zeroes read split ...passed 00:29:51.844 Test: blockdev write zeroes read split partial ...passed 00:29:51.844 Test: blockdev reset ...passed 00:29:51.844 Test: blockdev write read 8 blocks ...passed 00:29:51.844 Test: blockdev write read size > 128k ...passed 00:29:51.844 Test: blockdev write read invalid size ...passed 00:29:51.844 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:51.844 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:51.844 Test: blockdev write read max offset ...passed 00:29:51.844 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:51.844 Test: blockdev writev readv 8 blocks ...passed 00:29:51.844 Test: blockdev writev readv 30 x 1block ...passed 00:29:51.844 Test: blockdev writev readv block ...passed 00:29:51.844 Test: blockdev writev readv size > 128k ...passed 00:29:51.844 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:51.844 Test: blockdev comparev and writev ...passed 00:29:51.844 Test: blockdev nvme passthru rw ...passed 00:29:51.844 Test: blockdev nvme passthru vendor specific ...passed 00:29:51.844 Test: blockdev nvme admin passthru ...passed 00:29:51.844 Test: blockdev copy ...passed 00:29:51.844 Suite: bdevio tests on: crypto_ram3 00:29:51.844 Test: blockdev write read block ...passed 00:29:51.844 Test: blockdev write zeroes read block ...passed 00:29:51.844 Test: blockdev write zeroes read no split ...passed 00:29:51.844 Test: blockdev write zeroes read split ...passed 00:29:51.844 Test: blockdev write zeroes read split partial ...passed 00:29:51.844 Test: blockdev reset ...passed 00:29:51.844 Test: blockdev write read 8 blocks ...passed 00:29:51.844 Test: blockdev write read size > 128k ...passed 00:29:51.844 Test: blockdev write read invalid size ...passed 00:29:51.844 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:51.844 Test: blockdev write read offset + nbytes > size of blockdev 
...passed 00:29:51.844 Test: blockdev write read max offset ...passed 00:29:51.844 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:51.844 Test: blockdev writev readv 8 blocks ...passed 00:29:51.844 Test: blockdev writev readv 30 x 1block ...passed 00:29:51.844 Test: blockdev writev readv block ...passed 00:29:51.844 Test: blockdev writev readv size > 128k ...passed 00:29:51.844 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:51.844 Test: blockdev comparev and writev ...passed 00:29:51.844 Test: blockdev nvme passthru rw ...passed 00:29:51.844 Test: blockdev nvme passthru vendor specific ...passed 00:29:51.844 Test: blockdev nvme admin passthru ...passed 00:29:51.844 Test: blockdev copy ...passed 00:29:51.844 Suite: bdevio tests on: crypto_ram2 00:29:51.844 Test: blockdev write read block ...passed 00:29:51.844 Test: blockdev write zeroes read block ...passed 00:29:51.844 Test: blockdev write zeroes read no split ...passed 00:29:52.102 Test: blockdev write zeroes read split ...passed 00:29:52.102 Test: blockdev write zeroes read split partial ...passed 00:29:52.102 Test: blockdev reset ...passed 00:29:52.102 Test: blockdev write read 8 blocks ...passed 00:29:52.102 Test: blockdev write read size > 128k ...passed 00:29:52.102 Test: blockdev write read invalid size ...passed 00:29:52.102 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:52.102 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:52.102 Test: blockdev write read max offset ...passed 00:29:52.102 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:52.102 Test: blockdev writev readv 8 blocks ...passed 00:29:52.102 Test: blockdev writev readv 30 x 1block ...passed 00:29:52.102 Test: blockdev writev readv block ...passed 00:29:52.102 Test: blockdev writev readv size > 128k ...passed 00:29:52.102 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:52.102 Test: 
blockdev comparev and writev ...passed 00:29:52.102 Test: blockdev nvme passthru rw ...passed 00:29:52.102 Test: blockdev nvme passthru vendor specific ...passed 00:29:52.102 Test: blockdev nvme admin passthru ...passed 00:29:52.102 Test: blockdev copy ...passed 00:29:52.102 Suite: bdevio tests on: crypto_ram 00:29:52.102 Test: blockdev write read block ...passed 00:29:52.102 Test: blockdev write zeroes read block ...passed 00:29:52.102 Test: blockdev write zeroes read no split ...passed 00:29:52.102 Test: blockdev write zeroes read split ...passed 00:29:52.102 Test: blockdev write zeroes read split partial ...passed 00:29:52.102 Test: blockdev reset ...passed 00:29:52.102 Test: blockdev write read 8 blocks ...passed 00:29:52.102 Test: blockdev write read size > 128k ...passed 00:29:52.102 Test: blockdev write read invalid size ...passed 00:29:52.102 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:52.102 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:52.102 Test: blockdev write read max offset ...passed 00:29:52.102 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:52.102 Test: blockdev writev readv 8 blocks ...passed 00:29:52.102 Test: blockdev writev readv 30 x 1block ...passed 00:29:52.102 Test: blockdev writev readv block ...passed 00:29:52.102 Test: blockdev writev readv size > 128k ...passed 00:29:52.102 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:52.102 Test: blockdev comparev and writev ...passed 00:29:52.102 Test: blockdev nvme passthru rw ...passed 00:29:52.102 Test: blockdev nvme passthru vendor specific ...passed 00:29:52.102 Test: blockdev nvme admin passthru ...passed 00:29:52.102 Test: blockdev copy ...passed 00:29:52.102 00:29:52.102 Run Summary: Type Total Ran Passed Failed Inactive 00:29:52.102 suites 4 4 n/a 0 0 00:29:52.102 tests 92 92 92 0 0 00:29:52.102 asserts 520 520 520 0 n/a 00:29:52.102 00:29:52.102 Elapsed time = 0.669 
seconds 00:29:52.102 0 00:29:52.102 10:44:55 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 2499099 00:29:52.102 10:44:55 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 2499099 ']' 00:29:52.102 10:44:55 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 2499099 00:29:52.102 10:44:55 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:29:52.102 10:44:55 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.102 10:44:55 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2499099 00:29:52.102 10:44:55 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:52.102 10:44:55 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:52.102 10:44:55 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2499099' 00:29:52.102 killing process with pid 2499099 00:29:52.102 10:44:55 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@969 -- # kill 2499099 00:29:52.102 10:44:55 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@974 -- # wait 2499099 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:29:52.667 00:29:52.667 real 0m3.772s 00:29:52.667 user 0m10.591s 00:29:52.667 sys 0m0.575s 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:52.667 ************************************ 00:29:52.667 END TEST bdev_bounds 00:29:52.667 ************************************ 00:29:52.667 10:44:56 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:29:52.667 10:44:56 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:29:52.667 10:44:56 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:52.667 10:44:56 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:52.667 ************************************ 00:29:52.667 START TEST bdev_nbd 00:29:52.667 ************************************ 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=4 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:52.667 
10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=2499529 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 2499529 /var/tmp/spdk-nbd.sock 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 2499529 ']' 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:52.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:52.667 10:44:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:52.924 [2024-07-25 10:44:56.386371] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:29:52.924 [2024-07-25 10:44:56.386453] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.924 [2024-07-25 10:44:56.464539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.924 [2024-07-25 10:44:56.575544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.924 [2024-07-25 10:44:56.596796] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:29:52.924 [2024-07-25 10:44:56.604815] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:29:52.924 [2024-07-25 10:44:56.612833] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:29:53.182 [2024-07-25 10:44:56.731445] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:29:55.709 [2024-07-25 10:44:59.014745] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:29:55.709 [2024-07-25 10:44:59.014837] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:29:55.709 [2024-07-25 10:44:59.014854] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:55.709 [2024-07-25 10:44:59.022761] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:29:55.709 [2024-07-25 10:44:59.022786] bdev.c:8190:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: Malloc1 00:29:55.709 [2024-07-25 10:44:59.022813] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:55.709 [2024-07-25 10:44:59.030782] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:29:55.709 [2024-07-25 10:44:59.030804] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:29:55.709 [2024-07-25 10:44:59.030831] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:55.709 [2024-07-25 10:44:59.038802] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:29:55.709 [2024-07-25 10:44:59.038823] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:29:55.709 [2024-07-25 10:44:59.038850] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:29:55.709 10:44:59 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:55.709 10:44:59 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:55.709 1+0 records in 00:29:55.709 1+0 records out 00:29:55.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217126 s, 18.9 MB/s 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:29:55.709 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:29:55.967 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:29:55.967 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:29:55.967 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:29:55.967 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:55.967 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:55.967 10:44:59 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:55.967 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:55.967 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:55.967 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:55.967 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:55.967 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:55.967 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:56.226 1+0 records in 00:29:56.226 1+0 records out 00:29:56.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200795 s, 20.4 MB/s 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:29:56.226 
10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:56.226 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:29:56.484 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:56.484 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:56.484 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:56.484 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:56.484 1+0 records in 00:29:56.484 1+0 records out 00:29:56.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218702 s, 18.7 MB/s 00:29:56.484 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:56.484 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:56.484 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:56.484 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:29:56.484 10:44:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:56.484 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:56.484 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:29:56.485 10:44:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:56.743 1+0 records in 00:29:56.743 1+0 records out 00:29:56.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255865 s, 16.0 MB/s 
00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:29:56.743 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:57.001 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:57.001 { 00:29:57.001 "nbd_device": "/dev/nbd0", 00:29:57.001 "bdev_name": "crypto_ram" 00:29:57.001 }, 00:29:57.001 { 00:29:57.001 "nbd_device": "/dev/nbd1", 00:29:57.001 "bdev_name": "crypto_ram2" 00:29:57.001 }, 00:29:57.001 { 00:29:57.001 "nbd_device": "/dev/nbd2", 00:29:57.001 "bdev_name": "crypto_ram3" 00:29:57.001 }, 00:29:57.001 { 00:29:57.001 "nbd_device": "/dev/nbd3", 00:29:57.001 "bdev_name": "crypto_ram4" 00:29:57.001 } 00:29:57.001 ]' 00:29:57.001 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:57.001 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:57.001 { 00:29:57.001 "nbd_device": "/dev/nbd0", 00:29:57.001 "bdev_name": "crypto_ram" 00:29:57.001 }, 00:29:57.001 { 00:29:57.001 "nbd_device": "/dev/nbd1", 
00:29:57.001 "bdev_name": "crypto_ram2" 00:29:57.001 }, 00:29:57.001 { 00:29:57.001 "nbd_device": "/dev/nbd2", 00:29:57.001 "bdev_name": "crypto_ram3" 00:29:57.001 }, 00:29:57.001 { 00:29:57.001 "nbd_device": "/dev/nbd3", 00:29:57.001 "bdev_name": "crypto_ram4" 00:29:57.001 } 00:29:57.001 ]' 00:29:57.001 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:57.001 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:29:57.001 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:57.001 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:29:57.001 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:57.001 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:57.001 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:57.001 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:57.259 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:57.259 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:57.259 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:57.259 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:57.259 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:57.259 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:57.259 10:45:00 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:29:57.259 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:57.259 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:57.259 10:45:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:57.515 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:57.515 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:57.515 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:57.515 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:57.515 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:57.515 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:57.515 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:57.515 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:57.515 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:57.515 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:29:57.772 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:29:57.772 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:29:57.772 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:29:57.772 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:57.772 10:45:01 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:57.772 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:29:57.772 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:57.772 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:57.772 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:57.772 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:29:58.030 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:29:58.030 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:29:58.030 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:29:58.030 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:58.030 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:58.030 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:29:58.030 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:58.030 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:58.030 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:58.030 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:58.030 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:58.288 10:45:01 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram 
crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:29:58.288 10:45:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:29:58.550 /dev/nbd0 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:58.550 10:45:02 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:58.550 1+0 records in 00:29:58.550 1+0 records out 00:29:58.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212185 s, 19.3 MB/s 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:29:58.550 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:29:58.836 /dev/nbd1 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 
00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:58.836 1+0 records in 00:29:58.836 1+0 records out 00:29:58.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028239 s, 14.5 MB/s 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:29:58.836 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:29:59.103 /dev/nbd10 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:59.103 1+0 records in 00:29:59.103 1+0 records out 00:29:59.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026086 s, 15.7 MB/s 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:59.103 10:45:02 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:29:59.103 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:29:59.361 /dev/nbd11 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:59.361 1+0 records in 00:29:59.361 1+0 records out 00:29:59.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280991 s, 14.6 
MB/s 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:59.361 10:45:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:59.619 { 00:29:59.619 "nbd_device": "/dev/nbd0", 00:29:59.619 "bdev_name": "crypto_ram" 00:29:59.619 }, 00:29:59.619 { 00:29:59.619 "nbd_device": "/dev/nbd1", 00:29:59.619 "bdev_name": "crypto_ram2" 00:29:59.619 }, 00:29:59.619 { 00:29:59.619 "nbd_device": "/dev/nbd10", 00:29:59.619 "bdev_name": "crypto_ram3" 00:29:59.619 }, 00:29:59.619 { 00:29:59.619 "nbd_device": "/dev/nbd11", 00:29:59.619 "bdev_name": "crypto_ram4" 00:29:59.619 } 00:29:59.619 ]' 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:59.619 { 00:29:59.619 "nbd_device": "/dev/nbd0", 00:29:59.619 "bdev_name": 
"crypto_ram" 00:29:59.619 }, 00:29:59.619 { 00:29:59.619 "nbd_device": "/dev/nbd1", 00:29:59.619 "bdev_name": "crypto_ram2" 00:29:59.619 }, 00:29:59.619 { 00:29:59.619 "nbd_device": "/dev/nbd10", 00:29:59.619 "bdev_name": "crypto_ram3" 00:29:59.619 }, 00:29:59.619 { 00:29:59.619 "nbd_device": "/dev/nbd11", 00:29:59.619 "bdev_name": "crypto_ram4" 00:29:59.619 } 00:29:59.619 ]' 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:59.619 /dev/nbd1 00:29:59.619 /dev/nbd10 00:29:59.619 /dev/nbd11' 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:59.619 /dev/nbd1 00:29:59.619 /dev/nbd10 00:29:59.619 /dev/nbd11' 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:29:59.619 
10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:59.619 256+0 records in 00:29:59.619 256+0 records out 00:29:59.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495748 s, 212 MB/s 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:59.619 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:59.887 256+0 records in 00:29:59.887 256+0 records out 00:29:59.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0536618 s, 19.5 MB/s 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:59.887 256+0 records in 00:29:59.887 256+0 records out 00:29:59.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0533298 s, 19.7 MB/s 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:29:59.887 256+0 records in 00:29:59.887 256+0 records out 00:29:59.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.054178 s, 19.4 MB/s 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:29:59.887 256+0 records in 00:29:59.887 256+0 records out 00:29:59.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0476069 s, 22.0 MB/s 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:59.887 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 
00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:59.888 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:00.151 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:00.151 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:00.151 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:00.151 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:00.151 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:00.151 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:00.151 
10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:00.151 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:00.151 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:00.151 10:45:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:00.408 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:00.408 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:00.408 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:00.408 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:00.408 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:00.408 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:00.408 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:00.408 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:00.408 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:00.408 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:30:00.666 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:30:00.666 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:30:00.666 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:30:00.666 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:00.666 
10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:00.666 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:30:00.666 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:00.666 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:00.666 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:00.666 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:30:00.924 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:30:00.924 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:30:00.924 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:30:00.924 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:00.924 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:00.924 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:30:00.924 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:00.924 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:00.924 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:00.924 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:00.924 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 
-- # nbd_disks_json='[]' 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:30:01.181 10:45:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:01.439 malloc_lvol_verify 00:30:01.439 10:45:05 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:01.696 72d98828-a93a-4282-88e5-f5c56592aa86 00:30:01.696 10:45:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:01.953 e5aaf456-1f3f-42d2-bf42-ea01794c8ea0 00:30:01.953 10:45:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:02.211 /dev/nbd0 00:30:02.211 10:45:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:30:02.211 mke2fs 1.46.5 (30-Dec-2021) 00:30:02.211 Discarding device blocks: 0/4096 done 00:30:02.211 Creating filesystem with 4096 1k blocks and 1024 inodes 00:30:02.211 00:30:02.211 Allocating group tables: 0/1 done 00:30:02.211 Writing inode tables: 0/1 done 00:30:02.211 Creating journal (1024 blocks): done 00:30:02.211 Writing superblocks and filesystem accounting information: 0/1 done 00:30:02.211 00:30:02.211 10:45:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:30:02.211 10:45:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:02.211 10:45:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:02.211 10:45:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:02.211 10:45:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:02.211 10:45:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:02.211 10:45:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:30:02.211 10:45:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 2499529 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 2499529 ']' 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 2499529 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2499529 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:02.468 10:45:06 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2499529' 00:30:02.468 killing process with pid 2499529 00:30:02.468 10:45:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@969 -- # kill 2499529 00:30:02.469 10:45:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@974 -- # wait 2499529 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:30:03.034 00:30:03.034 real 0m10.301s 00:30:03.034 user 0m13.789s 00:30:03.034 sys 0m3.656s 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:30:03.034 ************************************ 00:30:03.034 END TEST bdev_nbd 00:30:03.034 ************************************ 00:30:03.034 10:45:06 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:30:03.034 10:45:06 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = nvme ']' 00:30:03.034 10:45:06 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = gpt ']' 00:30:03.034 10:45:06 blockdev_crypto_aesni -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:30:03.034 10:45:06 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:03.034 10:45:06 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:03.034 10:45:06 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:03.034 ************************************ 00:30:03.034 START TEST bdev_fio 00:30:03.034 ************************************ 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- 
bdev/blockdev.sh@330 -- # local env_context 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:30:03.034 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- 
common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:03.034 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:30:03.035 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:30:03.035 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:03.035 10:45:06 
blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram4]' 00:30:03.035 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram4 00:30:03.035 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:30:03.035 10:45:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:03.035 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:30:03.035 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:03.035 10:45:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:30:03.295 ************************************ 00:30:03.295 START TEST bdev_fio_rw_verify 00:30:03.295 ************************************ 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:03.295 10:45:06 
blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:03.295 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:03.296 10:45:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:03.553 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:03.553 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:03.553 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:03.553 
job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:03.553 fio-3.35 00:30:03.553 Starting 4 threads 00:30:18.419 00:30:18.419 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2502001: Thu Jul 25 10:45:19 2024 00:30:18.419 read: IOPS=25.7k, BW=100MiB/s (105MB/s)(1004MiB/10001msec) 00:30:18.419 slat (usec): min=14, max=1664, avg=51.36, stdev=27.94 00:30:18.419 clat (usec): min=13, max=2215, avg=280.07, stdev=171.90 00:30:18.419 lat (usec): min=43, max=2266, avg=331.42, stdev=187.38 00:30:18.419 clat percentiles (usec): 00:30:18.419 | 50.000th=[ 243], 99.000th=[ 799], 99.900th=[ 979], 99.990th=[ 1090], 00:30:18.419 | 99.999th=[ 2114] 00:30:18.419 write: IOPS=28.1k, BW=110MiB/s (115MB/s)(1069MiB/9728msec); 0 zone resets 00:30:18.419 slat (usec): min=22, max=480, avg=61.91, stdev=27.98 00:30:18.419 clat (usec): min=28, max=1493, avg=343.55, stdev=206.04 00:30:18.419 lat (usec): min=74, max=1643, avg=405.46, stdev=221.27 00:30:18.419 clat percentiles (usec): 00:30:18.419 | 50.000th=[ 306], 99.000th=[ 996], 99.900th=[ 1221], 99.990th=[ 1369], 00:30:18.420 | 99.999th=[ 1450] 00:30:18.420 bw ( KiB/s): min=95008, max=140304, per=97.78%, avg=110023.16, stdev=2477.60, samples=76 00:30:18.420 iops : min=23752, max=35076, avg=27505.79, stdev=619.40, samples=76 00:30:18.420 lat (usec) : 20=0.01%, 50=0.01%, 100=7.95%, 250=36.46%, 500=40.87% 00:30:18.420 lat (usec) : 750=10.83%, 1000=3.34% 00:30:18.420 lat (msec) : 2=0.54%, 4=0.01% 00:30:18.420 cpu : usr=99.47%, sys=0.00%, ctx=78, majf=0, minf=288 00:30:18.420 IO depths : 1=10.2%, 2=25.6%, 4=51.2%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:18.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.420 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.420 issued rwts: total=256900,273657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:18.420 latency : target=0, window=0, percentile=100.00%, depth=8 
00:30:18.420 00:30:18.420 Run status group 0 (all jobs): 00:30:18.420 READ: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=1004MiB (1052MB), run=10001-10001msec 00:30:18.420 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1069MiB (1121MB), run=9728-9728msec 00:30:18.420 00:30:18.420 real 0m13.526s 00:30:18.420 user 0m43.102s 00:30:18.420 sys 0m0.468s 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:30:18.420 ************************************ 00:30:18.420 END TEST bdev_fio_rw_verify 00:30:18.420 ************************************ 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 
00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "d73a95fd-99d5-56ec-9187-3a34ca7e5473"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d73a95fd-99d5-56ec-9187-3a34ca7e5473",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "089bf40b-5678-5a31-8225-9c70c10cff64"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "089bf40b-5678-5a31-8225-9c70c10cff64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "2450a284-2bc2-57d3-ade5-f5b8fe4e906d"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "2450a284-2bc2-57d3-ade5-f5b8fe4e906d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "40190012-ace0-5c56-8469-7bd77eed14d2"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "40190012-ace0-5c56-8469-7bd77eed14d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:30:18.420 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:30:18.421 crypto_ram2 00:30:18.421 crypto_ram3 00:30:18.421 crypto_ram4 ]] 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' 
"name": "crypto_ram",' ' "aliases": [' ' "d73a95fd-99d5-56ec-9187-3a34ca7e5473"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d73a95fd-99d5-56ec-9187-3a34ca7e5473",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "089bf40b-5678-5a31-8225-9c70c10cff64"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "089bf40b-5678-5a31-8225-9c70c10cff64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "2450a284-2bc2-57d3-ade5-f5b8fe4e906d"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "2450a284-2bc2-57d3-ade5-f5b8fe4e906d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "40190012-ace0-5c56-8469-7bd77eed14d2"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "40190012-ace0-5c56-8469-7bd77eed14d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' 
"zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram4]' 
00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram4 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:30:18.421 ************************************ 00:30:18.421 START TEST bdev_fio_trim 00:30:18.421 ************************************ 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 
00:30:18.421 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:18.422 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:18.422 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:18.422 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:18.422 10:45:20 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:18.422 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:18.422 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:18.422 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:18.422 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:18.422 fio-3.35 00:30:18.422 Starting 4 threads 00:30:30.612 00:30:30.612 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2503661: Thu Jul 25 10:45:33 2024 00:30:30.612 write: IOPS=41.3k, BW=161MiB/s (169MB/s)(1614MiB/10001msec); 0 zone resets 00:30:30.612 slat (usec): min=14, max=1481, avg=55.04, stdev=30.05 00:30:30.612 clat (usec): min=34, max=1827, avg=246.19, stdev=159.43 00:30:30.612 lat (usec): min=63, max=1885, avg=301.23, stdev=179.98 00:30:30.612 clat 
percentiles (usec): 00:30:30.612 | 50.000th=[ 204], 99.000th=[ 783], 99.900th=[ 898], 99.990th=[ 979], 00:30:30.612 | 99.999th=[ 1045] 00:30:30.612 bw ( KiB/s): min=149296, max=204155, per=100.00%, avg=165576.58, stdev=3971.26, samples=76 00:30:30.612 iops : min=37324, max=51038, avg=41394.11, stdev=992.79, samples=76 00:30:30.612 trim: IOPS=41.3k, BW=161MiB/s (169MB/s)(1614MiB/10001msec); 0 zone resets 00:30:30.612 slat (usec): min=5, max=154, avg=14.14, stdev= 5.84 00:30:30.612 clat (usec): min=53, max=1723, avg=232.52, stdev=103.35 00:30:30.612 lat (usec): min=63, max=1735, avg=246.66, stdev=105.32 00:30:30.612 clat percentiles (usec): 00:30:30.612 | 50.000th=[ 217], 99.000th=[ 537], 99.900th=[ 611], 99.990th=[ 668], 00:30:30.612 | 99.999th=[ 848] 00:30:30.612 bw ( KiB/s): min=149296, max=204171, per=100.00%, avg=165578.26, stdev=3971.67, samples=76 00:30:30.612 iops : min=37324, max=51042, avg=41394.53, stdev=992.89, samples=76 00:30:30.612 lat (usec) : 50=0.01%, 100=9.27%, 250=54.60%, 500=30.76%, 750=4.64% 00:30:30.612 lat (usec) : 1000=0.73% 00:30:30.612 lat (msec) : 2=0.01% 00:30:30.612 cpu : usr=99.45%, sys=0.00%, ctx=63, majf=0, minf=119 00:30:30.612 IO depths : 1=8.1%, 2=26.3%, 4=52.5%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:30.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.612 complete : 0=0.0%, 4=88.4%, 8=11.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.612 issued rwts: total=0,413086,413087,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.612 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:30.612 00:30:30.612 Run status group 0 (all jobs): 00:30:30.612 WRITE: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=1614MiB (1692MB), run=10001-10001msec 00:30:30.612 TRIM: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=1614MiB (1692MB), run=10001-10001msec 00:30:30.612 00:30:30.612 real 0m13.532s 00:30:30.612 user 0m43.045s 00:30:30.612 sys 0m0.484s 00:30:30.612 10:45:33 
blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:30.612 10:45:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:30:30.612 ************************************ 00:30:30.612 END TEST bdev_fio_trim 00:30:30.612 ************************************ 00:30:30.612 10:45:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:30:30.612 10:45:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:30:30.612 10:45:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:30:30.612 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:30.612 10:45:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:30:30.612 00:30:30.612 real 0m27.290s 00:30:30.612 user 1m26.280s 00:30:30.612 sys 0m1.059s 00:30:30.612 10:45:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:30.612 10:45:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:30:30.612 ************************************ 00:30:30.612 END TEST bdev_fio 00:30:30.612 ************************************ 00:30:30.612 10:45:33 blockdev_crypto_aesni -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:30.612 10:45:33 blockdev_crypto_aesni -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:30.612 10:45:33 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:30:30.612 10:45:33 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:30.612 10:45:33 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:30.612 
************************************ 00:30:30.612 START TEST bdev_verify 00:30:30.612 ************************************ 00:30:30.612 10:45:34 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:30.612 [2024-07-25 10:45:34.070154] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:30:30.612 [2024-07-25 10:45:34.070241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504895 ] 00:30:30.612 [2024-07-25 10:45:34.153547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:30.612 [2024-07-25 10:45:34.274843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.612 [2024-07-25 10:45:34.274848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.612 [2024-07-25 10:45:34.296249] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:30:30.612 [2024-07-25 10:45:34.304272] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:30.612 [2024-07-25 10:45:34.312290] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:30.870 [2024-07-25 10:45:34.423591] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:30:33.398 [2024-07-25 10:45:36.700551] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:30:33.398 [2024-07-25 10:45:36.700641] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:33.398 
[2024-07-25 10:45:36.700658] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:33.398 [2024-07-25 10:45:36.708563] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:30:33.398 [2024-07-25 10:45:36.708588] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:33.398 [2024-07-25 10:45:36.708600] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:33.398 [2024-07-25 10:45:36.716586] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:30:33.398 [2024-07-25 10:45:36.716609] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:30:33.398 [2024-07-25 10:45:36.716621] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:33.398 [2024-07-25 10:45:36.724607] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:30:33.398 [2024-07-25 10:45:36.724632] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:30:33.398 [2024-07-25 10:45:36.724645] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:33.398 Running I/O for 5 seconds... 
00:30:38.693 00:30:38.693 Latency(us) 00:30:38.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.694 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:38.694 Verification LBA range: start 0x0 length 0x1000 00:30:38.694 crypto_ram : 5.07 606.33 2.37 0.00 0.00 210767.12 4636.07 121945.51 00:30:38.694 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:38.694 Verification LBA range: start 0x1000 length 0x1000 00:30:38.694 crypto_ram : 5.07 606.46 2.37 0.00 0.00 210733.25 4708.88 121945.51 00:30:38.694 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:38.694 Verification LBA range: start 0x0 length 0x1000 00:30:38.694 crypto_ram2 : 5.07 606.03 2.37 0.00 0.00 210353.38 4684.61 122722.23 00:30:38.694 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:38.694 Verification LBA range: start 0x1000 length 0x1000 00:30:38.694 crypto_ram2 : 5.07 606.35 2.37 0.00 0.00 210297.71 4781.70 121945.51 00:30:38.694 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:38.694 Verification LBA range: start 0x0 length 0x1000 00:30:38.694 crypto_ram3 : 5.05 4712.86 18.41 0.00 0.00 26983.48 4393.34 20097.71 00:30:38.694 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:38.694 Verification LBA range: start 0x1000 length 0x1000 00:30:38.694 crypto_ram3 : 5.05 4713.97 18.41 0.00 0.00 26977.93 4538.97 20097.71 00:30:38.694 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:38.694 Verification LBA range: start 0x0 length 0x1000 00:30:38.694 crypto_ram4 : 5.05 4710.27 18.40 0.00 0.00 26938.57 4903.06 19903.53 00:30:38.694 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:38.694 Verification LBA range: start 0x1000 length 0x1000 00:30:38.694 crypto_ram4 : 5.05 4712.91 18.41 0.00 0.00 26931.23 4878.79 
19320.98 00:30:38.694 =================================================================================================================== 00:30:38.694 Total : 21275.18 83.11 0.00 0.00 47943.85 4393.34 122722.23 00:30:38.952 00:30:38.952 real 0m8.390s 00:30:38.952 user 0m15.869s 00:30:38.952 sys 0m0.425s 00:30:38.952 10:45:42 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:38.952 10:45:42 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:30:38.952 ************************************ 00:30:38.952 END TEST bdev_verify 00:30:38.952 ************************************ 00:30:38.952 10:45:42 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:38.952 10:45:42 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:30:38.952 10:45:42 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:38.952 10:45:42 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:38.952 ************************************ 00:30:38.952 START TEST bdev_verify_big_io 00:30:38.952 ************************************ 00:30:38.953 10:45:42 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:38.953 [2024-07-25 10:45:42.507794] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:30:38.953 [2024-07-25 10:45:42.507851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505936 ] 00:30:38.953 [2024-07-25 10:45:42.592358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:39.211 [2024-07-25 10:45:42.712132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.211 [2024-07-25 10:45:42.712138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.211 [2024-07-25 10:45:42.733504] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:30:39.211 [2024-07-25 10:45:42.741533] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:39.211 [2024-07-25 10:45:42.749551] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:39.211 [2024-07-25 10:45:42.870305] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:30:41.751 [2024-07-25 10:45:45.147640] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:30:41.751 [2024-07-25 10:45:45.147744] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:41.751 [2024-07-25 10:45:45.147765] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:41.751 [2024-07-25 10:45:45.155654] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:30:41.751 [2024-07-25 10:45:45.155683] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:41.751 [2024-07-25 10:45:45.155702] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 
00:30:41.751 [2024-07-25 10:45:45.163675] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:30:41.751 [2024-07-25 10:45:45.163702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:30:41.751 [2024-07-25 10:45:45.163719] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:41.751 [2024-07-25 10:45:45.171697] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:30:41.751 [2024-07-25 10:45:45.171723] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:30:41.751 [2024-07-25 10:45:45.171746] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:41.751 Running I/O for 5 seconds... 00:30:48.305 00:30:48.305 Latency(us) 00:30:48.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.305 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:48.305 Verification LBA range: start 0x0 length 0x100 00:30:48.305 crypto_ram : 5.66 45.24 2.83 0.00 0.00 2727374.70 92430.03 2386092.94 00:30:48.305 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:48.305 Verification LBA range: start 0x100 length 0x100 00:30:48.305 crypto_ram : 5.65 45.27 2.83 0.00 0.00 2724474.12 100197.26 2373665.37 00:30:48.305 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:48.305 Verification LBA range: start 0x0 length 0x100 00:30:48.305 crypto_ram2 : 5.67 47.76 2.99 0.00 0.00 2520942.60 7912.87 2348810.24 00:30:48.305 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:48.305 Verification LBA range: start 0x100 length 0x100 00:30:48.305 crypto_ram2 : 5.67 47.79 2.99 0.00 0.00 2520621.77 7864.32 2336382.67 00:30:48.305 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 
65536) 00:30:48.305 Verification LBA range: start 0x0 length 0x100 00:30:48.305 crypto_ram3 : 5.51 321.51 20.09 0.00 0.00 359272.10 46797.56 546812.97 00:30:48.305 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:48.305 Verification LBA range: start 0x100 length 0x100 00:30:48.305 crypto_ram3 : 5.51 322.94 20.18 0.00 0.00 358134.12 50486.99 543706.07 00:30:48.305 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:48.305 Verification LBA range: start 0x0 length 0x100 00:30:48.305 crypto_ram4 : 5.62 341.08 21.32 0.00 0.00 329176.05 3592.34 497102.70 00:30:48.305 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:48.305 Verification LBA range: start 0x100 length 0x100 00:30:48.305 crypto_ram4 : 5.62 342.65 21.42 0.00 0.00 328030.31 4854.52 493995.80 00:30:48.305 =================================================================================================================== 00:30:48.305 Total : 1514.25 94.64 0.00 0.00 627123.47 3592.34 2386092.94 00:30:48.305 00:30:48.305 real 0m9.004s 00:30:48.305 user 0m17.060s 00:30:48.305 sys 0m0.446s 00:30:48.305 10:45:51 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:48.305 10:45:51 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:30:48.305 ************************************ 00:30:48.305 END TEST bdev_verify_big_io 00:30:48.305 ************************************ 00:30:48.305 10:45:51 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:48.305 10:45:51 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:30:48.305 10:45:51 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:30:48.305 10:45:51 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:48.305 ************************************ 00:30:48.305 START TEST bdev_write_zeroes 00:30:48.305 ************************************ 00:30:48.305 10:45:51 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:48.305 [2024-07-25 10:45:51.559801] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:30:48.305 [2024-07-25 10:45:51.559860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506999 ] 00:30:48.305 [2024-07-25 10:45:51.641063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.305 [2024-07-25 10:45:51.762267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.305 [2024-07-25 10:45:51.783552] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:30:48.305 [2024-07-25 10:45:51.791579] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:48.305 [2024-07-25 10:45:51.799597] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:48.305 [2024-07-25 10:45:51.915259] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:30:50.829 [2024-07-25 10:45:54.198650] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:30:50.829 [2024-07-25 10:45:54.198741] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 
00:30:50.829 [2024-07-25 10:45:54.198761] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:50.829 [2024-07-25 10:45:54.206668] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:30:50.829 [2024-07-25 10:45:54.206697] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:50.829 [2024-07-25 10:45:54.206711] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:50.829 [2024-07-25 10:45:54.214688] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:30:50.829 [2024-07-25 10:45:54.214716] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:30:50.829 [2024-07-25 10:45:54.214730] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:50.830 [2024-07-25 10:45:54.222708] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:30:50.830 [2024-07-25 10:45:54.222735] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:30:50.830 [2024-07-25 10:45:54.222749] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:50.830 Running I/O for 1 seconds... 
00:30:51.763 00:30:51.763 Latency(us) 00:30:51.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.763 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:51.763 crypto_ram : 1.03 1871.03 7.31 0.00 0.00 67885.29 5679.79 80390.83 00:30:51.763 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:51.763 crypto_ram2 : 1.03 1876.97 7.33 0.00 0.00 67286.95 5631.24 74953.77 00:30:51.763 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:51.763 crypto_ram3 : 1.02 14348.77 56.05 0.00 0.00 8786.16 2706.39 11456.66 00:30:51.763 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:51.763 crypto_ram4 : 1.02 14335.24 56.00 0.00 0.00 8753.73 2524.35 9175.04 00:30:51.763 =================================================================================================================== 00:30:51.763 Total : 32432.01 126.69 0.00 0.00 15594.85 2524.35 80390.83 00:30:52.328 00:30:52.328 real 0m4.325s 00:30:52.328 user 0m3.864s 00:30:52.328 sys 0m0.424s 00:30:52.328 10:45:55 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:52.328 10:45:55 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:30:52.328 ************************************ 00:30:52.328 END TEST bdev_write_zeroes 00:30:52.328 ************************************ 00:30:52.328 10:45:55 blockdev_crypto_aesni -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:52.328 10:45:55 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:30:52.328 10:45:55 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:52.328 
10:45:55 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:52.328 ************************************ 00:30:52.328 START TEST bdev_json_nonenclosed 00:30:52.328 ************************************ 00:30:52.328 10:45:55 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:52.329 [2024-07-25 10:45:55.932402] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:30:52.329 [2024-07-25 10:45:55.932460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507538 ] 00:30:52.329 [2024-07-25 10:45:56.013839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.587 [2024-07-25 10:45:56.137197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.587 [2024-07-25 10:45:56.137305] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:30:52.587 [2024-07-25 10:45:56.137328] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:52.587 [2024-07-25 10:45:56.137341] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:52.587 00:30:52.587 real 0m0.392s 00:30:52.587 user 0m0.275s 00:30:52.587 sys 0m0.114s 00:30:52.587 10:45:56 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:52.587 10:45:56 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:30:52.587 ************************************ 00:30:52.587 END TEST bdev_json_nonenclosed 00:30:52.587 ************************************ 00:30:52.845 10:45:56 blockdev_crypto_aesni -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:52.845 10:45:56 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:30:52.845 10:45:56 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:52.845 10:45:56 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:52.845 ************************************ 00:30:52.845 START TEST bdev_json_nonarray 00:30:52.845 ************************************ 00:30:52.845 10:45:56 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:52.845 [2024-07-25 10:45:56.373276] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:30:52.845 [2024-07-25 10:45:56.373357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507569 ] 00:30:52.845 [2024-07-25 10:45:56.456361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.104 [2024-07-25 10:45:56.579211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.104 [2024-07-25 10:45:56.579313] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:30:53.104 [2024-07-25 10:45:56.579338] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:53.104 [2024-07-25 10:45:56.579352] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:53.104 00:30:53.104 real 0m0.392s 00:30:53.104 user 0m0.270s 00:30:53.104 sys 0m0.118s 00:30:53.104 10:45:56 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:53.104 10:45:56 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:30:53.104 ************************************ 00:30:53.104 END TEST bdev_json_nonarray 00:30:53.104 ************************************ 00:30:53.104 10:45:56 blockdev_crypto_aesni -- bdev/blockdev.sh@786 -- # [[ crypto_aesni == bdev ]] 00:30:53.104 10:45:56 blockdev_crypto_aesni -- bdev/blockdev.sh@793 -- # [[ crypto_aesni == gpt ]] 00:30:53.104 10:45:56 blockdev_crypto_aesni -- bdev/blockdev.sh@797 -- # [[ crypto_aesni == crypto_sw ]] 00:30:53.104 10:45:56 blockdev_crypto_aesni -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:30:53.104 10:45:56 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # cleanup 00:30:53.104 10:45:56 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:30:53.104 
10:45:56 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:30:53.104 10:45:56 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:30:53.104 10:45:56 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]] 00:30:53.104 10:45:56 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:30:53.104 10:45:56 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:30:53.104 00:30:53.104 real 1m12.181s 00:30:53.104 user 2m35.541s 00:30:53.104 sys 0m8.241s 00:30:53.104 10:45:56 blockdev_crypto_aesni -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:53.104 10:45:56 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:53.104 ************************************ 00:30:53.104 END TEST blockdev_crypto_aesni 00:30:53.104 ************************************ 00:30:53.104 10:45:56 -- spdk/autotest.sh@362 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:30:53.104 10:45:56 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:53.104 10:45:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:53.104 10:45:56 -- common/autotest_common.sh@10 -- # set +x 00:30:53.104 ************************************ 00:30:53.104 START TEST blockdev_crypto_sw 00:30:53.104 ************************************ 00:30:53.104 10:45:56 blockdev_crypto_sw -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:30:53.362 * Looking for test storage... 
00:30:53.362 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # uname -s 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@681 -- # test_type=crypto_sw 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # crypto_device= 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # dek= 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # env_ctx= 00:30:53.362 
10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == bdev ]] 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == crypto_* ]] 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2507646 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:53.362 10:45:56 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 2507646 00:30:53.362 10:45:56 blockdev_crypto_sw -- common/autotest_common.sh@831 -- # '[' -z 2507646 ']' 00:30:53.362 10:45:56 blockdev_crypto_sw -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.362 10:45:56 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:53.362 10:45:56 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.362 10:45:56 blockdev_crypto_sw -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:53.362 10:45:56 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:53.362 [2024-07-25 10:45:56.914036] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:30:53.362 [2024-07-25 10:45:56.914144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507646 ] 00:30:53.362 [2024-07-25 10:45:56.995531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.620 [2024-07-25 10:45:57.107004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.185 10:45:57 blockdev_crypto_sw -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:54.185 10:45:57 blockdev_crypto_sw -- common/autotest_common.sh@864 -- # return 0 00:30:54.185 10:45:57 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:30:54.185 10:45:57 blockdev_crypto_sw -- bdev/blockdev.sh@710 -- # setup_crypto_sw_conf 00:30:54.185 10:45:57 blockdev_crypto_sw -- bdev/blockdev.sh@192 -- # rpc_cmd 00:30:54.185 10:45:57 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.185 10:45:57 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:54.444 Malloc0 00:30:54.444 Malloc1 00:30:54.444 true 00:30:54.444 true 00:30:54.702 true 00:30:54.702 [2024-07-25 10:45:58.164434] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:30:54.702 crypto_ram 00:30:54.702 [2024-07-25 10:45:58.172443] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:30:54.702 crypto_ram2 00:30:54.702 [2024-07-25 10:45:58.180477] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:30:54.702 crypto_ram3 00:30:54.702 [ 00:30:54.702 { 00:30:54.702 "name": "Malloc1", 00:30:54.702 "aliases": [ 00:30:54.702 "793a5b83-73f8-4a2b-ac53-d7aa95f3cf16" 00:30:54.702 ], 00:30:54.702 "product_name": "Malloc disk", 00:30:54.702 "block_size": 4096, 00:30:54.702 "num_blocks": 4096, 00:30:54.702 "uuid": "793a5b83-73f8-4a2b-ac53-d7aa95f3cf16", 
00:30:54.702 "assigned_rate_limits": { 00:30:54.702 "rw_ios_per_sec": 0, 00:30:54.702 "rw_mbytes_per_sec": 0, 00:30:54.702 "r_mbytes_per_sec": 0, 00:30:54.702 "w_mbytes_per_sec": 0 00:30:54.702 }, 00:30:54.702 "claimed": true, 00:30:54.702 "claim_type": "exclusive_write", 00:30:54.702 "zoned": false, 00:30:54.702 "supported_io_types": { 00:30:54.702 "read": true, 00:30:54.702 "write": true, 00:30:54.702 "unmap": true, 00:30:54.702 "flush": true, 00:30:54.702 "reset": true, 00:30:54.702 "nvme_admin": false, 00:30:54.702 "nvme_io": false, 00:30:54.702 "nvme_io_md": false, 00:30:54.702 "write_zeroes": true, 00:30:54.703 "zcopy": true, 00:30:54.703 "get_zone_info": false, 00:30:54.703 "zone_management": false, 00:30:54.703 "zone_append": false, 00:30:54.703 "compare": false, 00:30:54.703 "compare_and_write": false, 00:30:54.703 "abort": true, 00:30:54.703 "seek_hole": false, 00:30:54.703 "seek_data": false, 00:30:54.703 "copy": true, 00:30:54.703 "nvme_iov_md": false 00:30:54.703 }, 00:30:54.703 "memory_domains": [ 00:30:54.703 { 00:30:54.703 "dma_device_id": "system", 00:30:54.703 "dma_device_type": 1 00:30:54.703 }, 00:30:54.703 { 00:30:54.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:54.703 "dma_device_type": 2 00:30:54.703 } 00:30:54.703 ], 00:30:54.703 "driver_specific": {} 00:30:54.703 } 00:30:54.703 ] 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # cat 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:30:54.703 10:45:58 
blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r .name 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "579ef14b-c962-5bc9-8a51-058f4d44fbf4"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' 
"uuid": "579ef14b-c962-5bc9-8a51-058f4d44fbf4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "33b99d56-5153-53f2-adb6-cd2c9ba531fe"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "33b99d56-5153-53f2-adb6-cd2c9ba531fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": 
"crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:30:54.703 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # killprocess 2507646 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@950 -- # '[' -z 2507646 ']' 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # kill -0 2507646 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # uname 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2507646 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2507646' 00:30:54.703 killing process with pid 2507646 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@969 -- # kill 2507646 00:30:54.703 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@974 -- # wait 2507646 00:30:55.268 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:55.268 10:45:58 blockdev_crypto_sw -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:30:55.268 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:30:55.268 
10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:55.268 10:45:58 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:55.268 ************************************ 00:30:55.268 START TEST bdev_hello_world 00:30:55.268 ************************************ 00:30:55.268 10:45:58 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:30:55.268 [2024-07-25 10:45:58.894283] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:30:55.269 [2024-07-25 10:45:58.894363] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507921 ] 00:30:55.269 [2024-07-25 10:45:58.976158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.526 [2024-07-25 10:45:59.095129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.784 [2024-07-25 10:45:59.280935] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:30:55.785 [2024-07-25 10:45:59.281021] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:55.785 [2024-07-25 10:45:59.281040] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:55.785 [2024-07-25 10:45:59.288951] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:30:55.785 [2024-07-25 10:45:59.288979] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:55.785 [2024-07-25 10:45:59.288998] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 
00:30:55.785 [2024-07-25 10:45:59.296972] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:30:55.785 [2024-07-25 10:45:59.297000] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:30:55.785 [2024-07-25 10:45:59.297020] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:55.785 [2024-07-25 10:45:59.339481] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:55.785 [2024-07-25 10:45:59.339531] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:30:55.785 [2024-07-25 10:45:59.339562] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:55.785 [2024-07-25 10:45:59.340893] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:55.785 [2024-07-25 10:45:59.340987] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:55.785 [2024-07-25 10:45:59.341010] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:55.785 [2024-07-25 10:45:59.341051] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:30:55.785 00:30:55.785 [2024-07-25 10:45:59.341077] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:56.043 00:30:56.043 real 0m0.746s 00:30:56.043 user 0m0.525s 00:30:56.043 sys 0m0.205s 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:30:56.043 ************************************ 00:30:56.043 END TEST bdev_hello_world 00:30:56.043 ************************************ 00:30:56.043 10:45:59 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:30:56.043 10:45:59 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:56.043 10:45:59 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:56.043 10:45:59 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:56.043 ************************************ 00:30:56.043 START TEST bdev_bounds 00:30:56.043 ************************************ 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=2508063 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 2508063' 00:30:56.043 Process bdevio pid: 2508063 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 2508063 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@831 -- # '[' -z 2508063 ']' 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:56.043 10:45:59 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:30:56.043 [2024-07-25 10:45:59.693888] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:30:56.043 [2024-07-25 10:45:59.693956] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2508063 ] 00:30:56.300 [2024-07-25 10:45:59.781149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:56.300 [2024-07-25 10:45:59.909128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.300 [2024-07-25 10:45:59.909169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:56.300 [2024-07-25 10:45:59.909172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.557 [2024-07-25 10:46:00.102417] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:30:56.557 [2024-07-25 10:46:00.102492] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:56.557 [2024-07-25 10:46:00.102509] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending 
base bdev arrival 00:30:56.557 [2024-07-25 10:46:00.110440] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:30:56.557 [2024-07-25 10:46:00.110465] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:56.557 [2024-07-25 10:46:00.110493] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:56.557 [2024-07-25 10:46:00.118447] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:30:56.557 [2024-07-25 10:46:00.118470] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:30:56.557 [2024-07-25 10:46:00.118483] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:57.120 10:46:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:57.120 10:46:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:30:57.120 10:46:00 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:57.120 I/O targets: 00:30:57.120 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:30:57.120 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:30:57.120 00:30:57.120 00:30:57.120 CUnit - A unit testing framework for C - Version 2.1-3 00:30:57.120 http://cunit.sourceforge.net/ 00:30:57.120 00:30:57.120 00:30:57.120 Suite: bdevio tests on: crypto_ram3 00:30:57.120 Test: blockdev write read block ...passed 00:30:57.121 Test: blockdev write zeroes read block ...passed 00:30:57.121 Test: blockdev write zeroes read no split ...passed 00:30:57.121 Test: blockdev write zeroes read split ...passed 00:30:57.121 Test: blockdev write zeroes read split partial ...passed 00:30:57.121 Test: blockdev reset ...passed 00:30:57.121 Test: blockdev write read 8 blocks ...passed 00:30:57.121 Test: blockdev write read 
size > 128k ...passed 00:30:57.121 Test: blockdev write read invalid size ...passed 00:30:57.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:57.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:57.121 Test: blockdev write read max offset ...passed 00:30:57.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:57.121 Test: blockdev writev readv 8 blocks ...passed 00:30:57.121 Test: blockdev writev readv 30 x 1block ...passed 00:30:57.121 Test: blockdev writev readv block ...passed 00:30:57.121 Test: blockdev writev readv size > 128k ...passed 00:30:57.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:57.121 Test: blockdev comparev and writev ...passed 00:30:57.121 Test: blockdev nvme passthru rw ...passed 00:30:57.121 Test: blockdev nvme passthru vendor specific ...passed 00:30:57.121 Test: blockdev nvme admin passthru ...passed 00:30:57.121 Test: blockdev copy ...passed 00:30:57.121 Suite: bdevio tests on: crypto_ram 00:30:57.121 Test: blockdev write read block ...passed 00:30:57.121 Test: blockdev write zeroes read block ...passed 00:30:57.121 Test: blockdev write zeroes read no split ...passed 00:30:57.121 Test: blockdev write zeroes read split ...passed 00:30:57.121 Test: blockdev write zeroes read split partial ...passed 00:30:57.121 Test: blockdev reset ...passed 00:30:57.121 Test: blockdev write read 8 blocks ...passed 00:30:57.121 Test: blockdev write read size > 128k ...passed 00:30:57.121 Test: blockdev write read invalid size ...passed 00:30:57.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:57.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:57.121 Test: blockdev write read max offset ...passed 00:30:57.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:57.121 Test: blockdev writev readv 8 blocks ...passed 00:30:57.121 Test: blockdev writev 
readv 30 x 1block ...passed 00:30:57.121 Test: blockdev writev readv block ...passed 00:30:57.121 Test: blockdev writev readv size > 128k ...passed 00:30:57.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:57.121 Test: blockdev comparev and writev ...passed 00:30:57.121 Test: blockdev nvme passthru rw ...passed 00:30:57.121 Test: blockdev nvme passthru vendor specific ...passed 00:30:57.121 Test: blockdev nvme admin passthru ...passed 00:30:57.121 Test: blockdev copy ...passed 00:30:57.121 00:30:57.121 Run Summary: Type Total Ran Passed Failed Inactive 00:30:57.121 suites 2 2 n/a 0 0 00:30:57.121 tests 46 46 46 0 0 00:30:57.121 asserts 260 260 260 0 n/a 00:30:57.121 00:30:57.121 Elapsed time = 0.103 seconds 00:30:57.121 0 00:30:57.121 10:46:00 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 2508063 00:30:57.121 10:46:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 2508063 ']' 00:30:57.377 10:46:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 2508063 00:30:57.377 10:46:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:30:57.377 10:46:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:57.377 10:46:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2508063 00:30:57.377 10:46:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:57.377 10:46:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:57.377 10:46:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2508063' 00:30:57.377 killing process with pid 2508063 00:30:57.377 10:46:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@969 -- # kill 2508063 00:30:57.377 10:46:00 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@974 -- # wait 2508063 00:30:57.634 10:46:01 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:30:57.634 00:30:57.634 real 0m1.476s 00:30:57.634 user 0m3.863s 00:30:57.634 sys 0m0.342s 00:30:57.634 10:46:01 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:57.634 10:46:01 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:30:57.634 ************************************ 00:30:57.634 END TEST bdev_bounds 00:30:57.634 ************************************ 00:30:57.634 10:46:01 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:30:57.634 10:46:01 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:57.634 10:46:01 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:57.634 10:46:01 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:57.634 ************************************ 00:30:57.634 START TEST bdev_nbd 00:30:57.634 ************************************ 00:30:57.634 10:46:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:30:57.634 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:30:57.634 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:30:57.634 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:57.634 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:30:57.634 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:30:57.634 
10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=2508238 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 2508238 /var/tmp/spdk-nbd.sock 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 2508238 ']' 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:57.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:57.635 10:46:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:30:57.635 [2024-07-25 10:46:01.225878] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:30:57.635 [2024-07-25 10:46:01.225964] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.635 [2024-07-25 10:46:01.300033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.892 [2024-07-25 10:46:01.410985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.892 [2024-07-25 10:46:01.595093] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:30:57.892 [2024-07-25 10:46:01.595173] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:57.892 [2024-07-25 10:46:01.595191] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:58.149 [2024-07-25 10:46:01.603108] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:30:58.149 [2024-07-25 10:46:01.603149] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:58.149 [2024-07-25 10:46:01.603163] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation 
deferred pending base bdev arrival 00:30:58.149 [2024-07-25 10:46:01.611140] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:30:58.149 [2024-07-25 10:46:01.611164] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:30:58.149 [2024-07-25 10:46:01.611176] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 
00:30:58.714 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:58.972 1+0 records in 00:30:58.972 1+0 records out 00:30:58.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179187 s, 22.9 MB/s 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # 
rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:58.972 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:59.230 1+0 
records in 00:30:59.230 1+0 records out 00:30:59.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248241 s, 16.5 MB/s 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:59.230 10:46:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:59.487 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:59.487 { 00:30:59.487 "nbd_device": "/dev/nbd0", 00:30:59.487 "bdev_name": "crypto_ram" 00:30:59.487 }, 00:30:59.487 { 00:30:59.487 "nbd_device": "/dev/nbd1", 00:30:59.487 "bdev_name": "crypto_ram3" 00:30:59.487 } 00:30:59.487 ]' 00:30:59.487 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:59.487 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:59.487 { 00:30:59.487 "nbd_device": "/dev/nbd0", 00:30:59.487 "bdev_name": "crypto_ram" 00:30:59.487 }, 00:30:59.487 { 00:30:59.487 "nbd_device": "/dev/nbd1", 00:30:59.487 "bdev_name": "crypto_ram3" 00:30:59.487 } 00:30:59.487 ]' 00:30:59.487 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 
-- # jq -r '.[] | .nbd_device' 00:30:59.487 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:59.487 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:59.487 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:59.487 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:59.487 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:59.487 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:59.487 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:59.745 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:59.745 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:59.745 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:59.745 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:59.745 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:59.745 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:59.745 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:59.745 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:59.745 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:59.745 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:00.002 10:46:03 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:00.002 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:00.002 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:00.003 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:00.003 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:00.003 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:00.003 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:00.003 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:00.003 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:00.003 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:00.003 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 
-- # echo 0 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:00.260 10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:00.260 
10:46:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:31:00.517 /dev/nbd0 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:00.517 1+0 records in 00:31:00.517 1+0 records out 00:31:00.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257185 s, 15.9 MB/s 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 
00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:00.517 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:31:00.798 /dev/nbd1 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:00.798 1+0 records in 00:31:00.798 1+0 records out 00:31:00.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309345 s, 13.2 MB/s 00:31:00.798 10:46:04 
blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:00.798 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:00.799 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:00.799 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:00.799 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:01.058 { 00:31:01.058 "nbd_device": "/dev/nbd0", 00:31:01.058 "bdev_name": "crypto_ram" 00:31:01.058 }, 00:31:01.058 { 00:31:01.058 "nbd_device": "/dev/nbd1", 00:31:01.058 "bdev_name": "crypto_ram3" 00:31:01.058 } 00:31:01.058 ]' 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:01.058 { 00:31:01.058 "nbd_device": "/dev/nbd0", 00:31:01.058 "bdev_name": "crypto_ram" 00:31:01.058 }, 00:31:01.058 { 00:31:01.058 "nbd_device": "/dev/nbd1", 00:31:01.058 "bdev_name": "crypto_ram3" 00:31:01.058 } 00:31:01.058 ]' 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:01.058 10:46:04 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:01.058 /dev/nbd1' 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:01.058 /dev/nbd1' 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:01.058 256+0 records in 00:31:01.058 256+0 records out 00:31:01.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488627 s, 215 MB/s 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:01.058 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 
bs=4096 count=256 oflag=direct 00:31:01.316 256+0 records in 00:31:01.316 256+0 records out 00:31:01.316 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271599 s, 38.6 MB/s 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:01.316 256+0 records in 00:31:01.316 256+0 records out 00:31:01.316 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0400302 s, 26.2 MB/s 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:31:01.316 
10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:01.316 10:46:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:01.574 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:01.574 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:01.574 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:01.574 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:01.574 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:01.574 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:01.574 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:01.574 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:01.574 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:01.574 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:01.832 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:01.832 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:01.832 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:01.832 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:01.832 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:01.832 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:01.832 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:01.832 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:01.832 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:01.832 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:01.832 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:02.090 10:46:05 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:02.090 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:02.347 malloc_lvol_verify 00:31:02.347 10:46:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:02.604 cd610680-e837-480b-adf2-6561a803733b 00:31:02.604 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:02.861 0b7eab30-620d-4077-b071-e39be418e6d3 00:31:02.861 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:03.119 /dev/nbd0 
00:31:03.119 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:03.119 mke2fs 1.46.5 (30-Dec-2021) 00:31:03.119 Discarding device blocks: 0/4096 done 00:31:03.119 Creating filesystem with 4096 1k blocks and 1024 inodes 00:31:03.119 00:31:03.119 Allocating group tables: 0/1 done 00:31:03.119 Writing inode tables: 0/1 done 00:31:03.119 Creating journal (1024 blocks): done 00:31:03.119 Writing superblocks and filesystem accounting information: 0/1 done 00:31:03.120 00:31:03.120 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:03.120 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:03.120 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:03.120 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:03.120 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:03.120 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:03.120 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:03.120 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- 
# grep -q -w nbd0 /proc/partitions 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 2508238 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 2508238 ']' 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 2508238 00:31:03.377 10:46:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:31:03.378 10:46:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:03.378 10:46:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2508238 00:31:03.378 10:46:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:03.378 10:46:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:03.378 10:46:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2508238' 00:31:03.378 killing process with pid 2508238 00:31:03.378 10:46:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@969 -- # kill 2508238 00:31:03.378 10:46:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@974 -- # wait 2508238 00:31:03.635 10:46:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:31:03.635 00:31:03.635 real 0m6.064s 00:31:03.635 user 0m8.823s 00:31:03.635 sys 0m2.167s 00:31:03.635 10:46:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:03.635 10:46:07 blockdev_crypto_sw.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:31:03.635 ************************************ 00:31:03.635 END TEST bdev_nbd 00:31:03.635 ************************************ 00:31:03.635 10:46:07 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:31:03.635 10:46:07 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = nvme ']' 00:31:03.636 10:46:07 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = gpt ']' 00:31:03.636 10:46:07 blockdev_crypto_sw -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:31:03.636 10:46:07 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:03.636 10:46:07 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:03.636 10:46:07 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:03.636 ************************************ 00:31:03.636 START TEST bdev_fio 00:31:03.636 ************************************ 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:31:03.636 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 
00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:03.636 ************************************ 00:31:03.636 START TEST bdev_fio_rw_verify 00:31:03.636 ************************************ 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1345 -- # grep libasan 00:31:03.636 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:03.894 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:03.894 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:03.894 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.894 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.894 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:03.894 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:03.894 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:03.894 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:03.894 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:03.894 10:46:07 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:04.152 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 
00:31:04.152 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:04.152 fio-3.35 00:31:04.152 Starting 2 threads 00:31:16.347 00:31:16.347 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=2509238: Thu Jul 25 10:46:18 2024 00:31:16.347 read: IOPS=24.1k, BW=94.1MiB/s (98.7MB/s)(941MiB/10000msec) 00:31:16.347 slat (usec): min=11, max=410, avg=18.99, stdev= 6.96 00:31:16.347 clat (usec): min=6, max=1697, avg=131.57, stdev=60.95 00:31:16.347 lat (usec): min=22, max=1720, avg=150.56, stdev=64.15 00:31:16.347 clat percentiles (usec): 00:31:16.347 | 50.000th=[ 125], 99.000th=[ 310], 99.900th=[ 388], 99.990th=[ 424], 00:31:16.347 | 99.999th=[ 1647] 00:31:16.347 write: IOPS=28.9k, BW=113MiB/s (118MB/s)(1071MiB/9498msec); 0 zone resets 00:31:16.347 slat (usec): min=12, max=701, avg=31.07, stdev= 8.18 00:31:16.347 clat (usec): min=21, max=1050, avg=177.09, stdev=88.42 00:31:16.347 lat (usec): min=42, max=1082, avg=208.16, stdev=91.56 00:31:16.347 clat percentiles (usec): 00:31:16.347 | 50.000th=[ 169], 99.000th=[ 412], 99.900th=[ 510], 99.990th=[ 594], 00:31:16.347 | 99.999th=[ 750] 00:31:16.347 bw ( KiB/s): min=85248, max=124440, per=94.58%, avg=109228.21, stdev=6441.21, samples=38 00:31:16.347 iops : min=21312, max=31110, avg=27307.05, stdev=1610.30, samples=38 00:31:16.347 lat (usec) : 10=0.01%, 20=0.01%, 50=6.18%, 100=21.14%, 250=59.92% 00:31:16.347 lat (usec) : 500=12.68%, 750=0.07%, 1000=0.01% 00:31:16.347 lat (msec) : 2=0.01% 00:31:16.347 cpu : usr=99.47%, sys=0.01%, ctx=45, majf=0, minf=468 00:31:16.347 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:16.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.347 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.347 issued rwts: total=240850,274224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.347 latency : target=0, window=0, percentile=100.00%, depth=8 
00:31:16.347 00:31:16.347 Run status group 0 (all jobs): 00:31:16.347 READ: bw=94.1MiB/s (98.7MB/s), 94.1MiB/s-94.1MiB/s (98.7MB/s-98.7MB/s), io=941MiB (987MB), run=10000-10000msec 00:31:16.347 WRITE: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=1071MiB (1123MB), run=9498-9498msec 00:31:16.347 00:31:16.347 real 0m10.998s 00:31:16.347 user 0m21.039s 00:31:16.347 sys 0m0.258s 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:31:16.347 ************************************ 00:31:16.347 END TEST bdev_fio_rw_verify 00:31:16.347 ************************************ 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:31:16.347 10:46:18 
blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:31:16.347 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "579ef14b-c962-5bc9-8a51-058f4d44fbf4"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "579ef14b-c962-5bc9-8a51-058f4d44fbf4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": 
"crypto_ram3",' ' "aliases": [' ' "33b99d56-5153-53f2-adb6-cd2c9ba531fe"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "33b99d56-5153-53f2-adb6-cd2c9ba531fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:31:16.348 crypto_ram3 ]] 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "579ef14b-c962-5bc9-8a51-058f4d44fbf4"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "579ef14b-c962-5bc9-8a51-058f4d44fbf4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "33b99d56-5153-53f2-adb6-cd2c9ba531fe"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "33b99d56-5153-53f2-adb6-cd2c9ba531fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 
'select(.supported_io_types.unmap == true) | .name') 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:16.348 ************************************ 00:31:16.348 START TEST bdev_fio_trim 00:31:16.348 ************************************ 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:16.348 10:46:18 
blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:16.348 10:46:18 
blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:16.348 10:46:18 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:16.348 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:16.348 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:16.348 fio-3.35 00:31:16.348 Starting 2 threads 00:31:26.317 00:31:26.317 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=2510630: Thu Jul 25 10:46:29 2024 00:31:26.317 write: IOPS=44.9k, BW=175MiB/s 
(184MB/s)(1753MiB/10001msec); 0 zone resets 00:31:26.317 slat (usec): min=11, max=672, avg=19.63, stdev= 6.32 00:31:26.317 clat (usec): min=29, max=1704, avg=142.26, stdev=87.22 00:31:26.317 lat (usec): min=40, max=1726, avg=161.89, stdev=90.89 00:31:26.317 clat percentiles (usec): 00:31:26.317 | 50.000th=[ 111], 99.000th=[ 351], 99.900th=[ 445], 99.990th=[ 498], 00:31:26.317 | 99.999th=[ 1631] 00:31:26.317 bw ( KiB/s): min=117184, max=196768, per=99.89%, avg=179294.74, stdev=10387.80, samples=38 00:31:26.317 iops : min=29296, max=49192, avg=44823.68, stdev=2596.95, samples=38 00:31:26.317 trim: IOPS=44.9k, BW=175MiB/s (184MB/s)(1753MiB/10001msec); 0 zone resets 00:31:26.317 slat (usec): min=4, max=295, avg= 9.30, stdev= 3.74 00:31:26.317 clat (usec): min=40, max=1556, avg=93.51, stdev=34.36 00:31:26.317 lat (usec): min=46, max=1567, avg=102.81, stdev=35.45 00:31:26.317 clat percentiles (usec): 00:31:26.317 | 50.000th=[ 92], 99.000th=[ 196], 99.900th=[ 262], 99.990th=[ 293], 00:31:26.317 | 99.999th=[ 799] 00:31:26.317 bw ( KiB/s): min=117184, max=196768, per=99.89%, avg=179296.00, stdev=10387.79, samples=38 00:31:26.317 iops : min=29296, max=49192, avg=44824.00, stdev=2596.95, samples=38 00:31:26.317 lat (usec) : 50=11.45%, 100=40.07%, 250=41.72%, 500=6.76%, 750=0.01% 00:31:26.317 lat (usec) : 1000=0.01% 00:31:26.317 lat (msec) : 2=0.01% 00:31:26.317 cpu : usr=99.43%, sys=0.00%, ctx=47, majf=0, minf=272 00:31:26.317 IO depths : 1=7.5%, 2=17.5%, 4=60.0%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.317 complete : 0=0.0%, 4=87.0%, 8=13.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.317 issued rwts: total=0,448778,448779,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.317 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:26.317 00:31:26.317 Run status group 0 (all jobs): 00:31:26.317 WRITE: bw=175MiB/s (184MB/s), 175MiB/s-175MiB/s (184MB/s-184MB/s), io=1753MiB (1838MB), 
run=10001-10001msec 00:31:26.317 TRIM: bw=175MiB/s (184MB/s), 175MiB/s-175MiB/s (184MB/s-184MB/s), io=1753MiB (1838MB), run=10001-10001msec 00:31:26.317 00:31:26.317 real 0m11.014s 00:31:26.317 user 0m20.936s 00:31:26.317 sys 0m0.275s 00:31:26.317 10:46:29 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:26.317 10:46:29 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:31:26.317 ************************************ 00:31:26.317 END TEST bdev_fio_trim 00:31:26.317 ************************************ 00:31:26.317 10:46:29 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:31:26.317 10:46:29 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:26.317 10:46:29 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:31:26.317 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:31:26.317 10:46:29 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:31:26.317 00:31:26.317 real 0m22.230s 00:31:26.317 user 0m42.098s 00:31:26.317 sys 0m0.636s 00:31:26.317 10:46:29 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:26.317 10:46:29 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:26.317 ************************************ 00:31:26.317 END TEST bdev_fio 00:31:26.317 ************************************ 00:31:26.317 10:46:29 blockdev_crypto_sw -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:26.317 10:46:29 blockdev_crypto_sw -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:26.317 10:46:29 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 
00:31:26.317 10:46:29 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:26.317 10:46:29 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:26.317 ************************************ 00:31:26.317 START TEST bdev_verify 00:31:26.317 ************************************ 00:31:26.317 10:46:29 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:26.318 [2024-07-25 10:46:29.615377] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:31:26.318 [2024-07-25 10:46:29.615463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511865 ] 00:31:26.318 [2024-07-25 10:46:29.705582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:26.318 [2024-07-25 10:46:29.829645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.318 [2024-07-25 10:46:29.829650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.576 [2024-07-25 10:46:30.034852] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:31:26.576 [2024-07-25 10:46:30.034946] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:26.576 [2024-07-25 10:46:30.034984] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:26.576 [2024-07-25 10:46:30.042870] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:31:26.576 [2024-07-25 10:46:30.042905] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:26.576 
[2024-07-25 10:46:30.042922] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:26.576 [2024-07-25 10:46:30.050886] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:31:26.576 [2024-07-25 10:46:30.050914] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:31:26.576 [2024-07-25 10:46:30.050929] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:26.576 Running I/O for 5 seconds... 00:31:31.835 00:31:31.835 Latency(us) 00:31:31.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:31.835 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:31.835 Verification LBA range: start 0x0 length 0x800 00:31:31.835 crypto_ram : 5.02 7652.13 29.89 0.00 0.00 16670.39 2014.63 17961.72 00:31:31.835 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:31.835 Verification LBA range: start 0x800 length 0x800 00:31:31.835 crypto_ram : 5.02 7652.51 29.89 0.00 0.00 16669.14 2051.03 18252.99 00:31:31.835 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:31.835 Verification LBA range: start 0x0 length 0x800 00:31:31.835 crypto_ram3 : 5.02 3823.29 14.93 0.00 0.00 33314.10 9369.22 22913.33 00:31:31.835 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:31.835 Verification LBA range: start 0x800 length 0x800 00:31:31.835 crypto_ram3 : 5.02 3823.44 14.94 0.00 0.00 33311.95 9514.86 23787.14 00:31:31.835 =================================================================================================================== 00:31:31.835 Total : 22951.37 89.65 0.00 0.00 22217.52 2014.63 23787.14 00:31:31.835 00:31:31.835 real 0m5.850s 00:31:31.835 user 0m10.973s 00:31:31.835 sys 0m0.229s 00:31:31.835 10:46:35 blockdev_crypto_sw.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:31:31.835 10:46:35 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:31:31.835 ************************************ 00:31:31.835 END TEST bdev_verify 00:31:31.835 ************************************ 00:31:31.835 10:46:35 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:31.835 10:46:35 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:31:31.835 10:46:35 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:31.835 10:46:35 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:31.835 ************************************ 00:31:31.835 START TEST bdev_verify_big_io 00:31:31.835 ************************************ 00:31:31.835 10:46:35 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:31.835 [2024-07-25 10:46:35.503976] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:31:31.835 [2024-07-25 10:46:35.504050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512544 ] 00:31:32.092 [2024-07-25 10:46:35.588322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:32.092 [2024-07-25 10:46:35.712179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.092 [2024-07-25 10:46:35.712185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.349 [2024-07-25 10:46:35.902902] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:31:32.349 [2024-07-25 10:46:35.902985] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:32.349 [2024-07-25 10:46:35.903005] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:32.349 [2024-07-25 10:46:35.910920] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:31:32.349 [2024-07-25 10:46:35.910949] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:32.349 [2024-07-25 10:46:35.910964] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:32.349 [2024-07-25 10:46:35.918943] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:31:32.349 [2024-07-25 10:46:35.918971] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:31:32.349 [2024-07-25 10:46:35.918985] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:32.349 Running I/O for 5 seconds... 
00:31:37.610 00:31:37.610 Latency(us) 00:31:37.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.610 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:31:37.610 Verification LBA range: start 0x0 length 0x80 00:31:37.610 crypto_ram : 5.15 720.41 45.03 0.00 0.00 174812.91 6262.33 242337.56 00:31:37.610 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:31:37.610 Verification LBA range: start 0x80 length 0x80 00:31:37.610 crypto_ram : 5.19 715.77 44.74 0.00 0.00 175822.12 6068.15 245444.46 00:31:37.610 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:31:37.610 Verification LBA range: start 0x0 length 0x80 00:31:37.610 crypto_ram3 : 5.17 371.67 23.23 0.00 0.00 330360.23 5582.70 256318.58 00:31:37.610 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:31:37.610 Verification LBA range: start 0x80 length 0x80 00:31:37.610 crypto_ram3 : 5.20 369.27 23.08 0.00 0.00 332368.09 5679.79 259425.47 00:31:37.610 =================================================================================================================== 00:31:37.610 Total : 2177.12 136.07 0.00 0.00 228515.23 5582.70 259425.47 00:31:37.868 00:31:37.868 real 0m6.021s 00:31:37.868 user 0m11.337s 00:31:37.868 sys 0m0.234s 00:31:37.868 10:46:41 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:37.868 10:46:41 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:31:37.868 ************************************ 00:31:37.868 END TEST bdev_verify_big_io 00:31:37.868 ************************************ 00:31:37.868 10:46:41 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:31:37.868 10:46:41 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:31:37.868 10:46:41 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:37.868 10:46:41 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:37.868 ************************************ 00:31:37.868 START TEST bdev_write_zeroes 00:31:37.868 ************************************ 00:31:37.868 10:46:41 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:37.868 [2024-07-25 10:46:41.567876] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:31:37.868 [2024-07-25 10:46:41.567946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513338 ] 00:31:38.127 [2024-07-25 10:46:41.650004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.127 [2024-07-25 10:46:41.767981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.412 [2024-07-25 10:46:41.949268] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:31:38.412 [2024-07-25 10:46:41.949356] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:38.412 [2024-07-25 10:46:41.949376] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:38.412 [2024-07-25 10:46:41.957285] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:31:38.412 [2024-07-25 10:46:41.957315] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:38.412 
[2024-07-25 10:46:41.957329] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:38.412 [2024-07-25 10:46:41.965307] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:31:38.412 [2024-07-25 10:46:41.965335] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:31:38.412 [2024-07-25 10:46:41.965348] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:38.412 Running I/O for 1 seconds... 00:31:39.345 00:31:39.345 Latency(us) 00:31:39.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.345 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:31:39.345 crypto_ram : 1.01 26233.63 102.48 0.00 0.00 4865.39 1274.31 6602.15 00:31:39.345 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:31:39.345 crypto_ram3 : 1.01 13163.32 51.42 0.00 0.00 9651.16 1601.99 9903.22 00:31:39.345 =================================================================================================================== 00:31:39.345 Total : 39396.95 153.89 0.00 0.00 6470.94 1274.31 9903.22 00:31:39.602 00:31:39.602 real 0m1.782s 00:31:39.602 user 0m1.556s 00:31:39.602 sys 0m0.204s 00:31:39.602 10:46:43 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:39.602 10:46:43 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:31:39.602 ************************************ 00:31:39.602 END TEST bdev_write_zeroes 00:31:39.602 ************************************ 00:31:39.862 10:46:43 blockdev_crypto_sw -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:31:39.862 10:46:43 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:31:39.862 10:46:43 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:39.862 10:46:43 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:39.862 ************************************ 00:31:39.862 START TEST bdev_json_nonenclosed 00:31:39.862 ************************************ 00:31:39.862 10:46:43 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:39.862 [2024-07-25 10:46:43.392398] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:31:39.862 [2024-07-25 10:46:43.392492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513497 ] 00:31:39.862 [2024-07-25 10:46:43.469247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.120 [2024-07-25 10:46:43.582507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.120 [2024-07-25 10:46:43.582605] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:31:40.120 [2024-07-25 10:46:43.582625] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:40.120 [2024-07-25 10:46:43.582637] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:40.120 00:31:40.120 real 0m0.366s 00:31:40.120 user 0m0.255s 00:31:40.120 sys 0m0.108s 00:31:40.120 10:46:43 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:40.120 10:46:43 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:31:40.120 ************************************ 00:31:40.120 END TEST bdev_json_nonenclosed 00:31:40.120 ************************************ 00:31:40.120 10:46:43 blockdev_crypto_sw -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:40.120 10:46:43 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:31:40.120 10:46:43 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:40.120 10:46:43 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:40.120 ************************************ 00:31:40.120 START TEST bdev_json_nonarray 00:31:40.120 ************************************ 00:31:40.120 10:46:43 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:40.120 [2024-07-25 10:46:43.816113] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:31:40.120 [2024-07-25 10:46:43.816196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513642 ] 00:31:40.378 [2024-07-25 10:46:43.898401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.378 [2024-07-25 10:46:44.009847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.378 [2024-07-25 10:46:44.009970] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:31:40.378 [2024-07-25 10:46:44.009991] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:40.378 [2024-07-25 10:46:44.010019] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:40.637 00:31:40.637 real 0m0.380s 00:31:40.637 user 0m0.261s 00:31:40.637 sys 0m0.116s 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:31:40.637 ************************************ 00:31:40.637 END TEST bdev_json_nonarray 00:31:40.637 ************************************ 00:31:40.637 10:46:44 blockdev_crypto_sw -- bdev/blockdev.sh@786 -- # [[ crypto_sw == bdev ]] 00:31:40.637 10:46:44 blockdev_crypto_sw -- bdev/blockdev.sh@793 -- # [[ crypto_sw == gpt ]] 00:31:40.637 10:46:44 blockdev_crypto_sw -- bdev/blockdev.sh@797 -- # [[ crypto_sw == crypto_sw ]] 00:31:40.637 10:46:44 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # run_test bdev_crypto_enomem bdev_crypto_enomem 00:31:40.637 10:46:44 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:40.637 10:46:44 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:40.637 10:46:44 blockdev_crypto_sw -- 
common/autotest_common.sh@10 -- # set +x 00:31:40.637 ************************************ 00:31:40.637 START TEST bdev_crypto_enomem 00:31:40.637 ************************************ 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1125 -- # bdev_crypto_enomem 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@634 -- # local base_dev=base0 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local test_dev=crypt0 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local err_dev=EE_base0 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local qd=32 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # ERR_PID=2513663 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f '' 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # waitforlisten 2513663 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@831 -- # '[' -z 2513663 ']' 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:40.637 10:46:44 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:40.637 [2024-07-25 10:46:44.246160] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:31:40.637 [2024-07-25 10:46:44.246228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513663 ] 00:31:40.637 [2024-07-25 10:46:44.330457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.895 [2024-07-25 10:46:44.450049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@864 -- # return 0 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@644 -- # rpc_cmd 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:41.828 true 00:31:41.828 base0 00:31:41.828 true 00:31:41.828 [2024-07-25 10:46:45.256544] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:31:41.828 crypt0 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@651 -- # waitforbdev crypt0 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local bdev_name=crypt0 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@901 -- # local i 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:41.828 [ 00:31:41.828 { 00:31:41.828 "name": "crypt0", 00:31:41.828 "aliases": [ 00:31:41.828 "261a7318-f44e-5c32-85e0-21eb4811b64c" 00:31:41.828 ], 00:31:41.828 "product_name": "crypto", 00:31:41.828 "block_size": 512, 00:31:41.828 "num_blocks": 2097152, 00:31:41.828 "uuid": "261a7318-f44e-5c32-85e0-21eb4811b64c", 00:31:41.828 "assigned_rate_limits": { 00:31:41.828 "rw_ios_per_sec": 0, 00:31:41.828 "rw_mbytes_per_sec": 0, 00:31:41.828 "r_mbytes_per_sec": 0, 00:31:41.828 "w_mbytes_per_sec": 0 00:31:41.828 }, 00:31:41.828 "claimed": false, 00:31:41.828 "zoned": false, 00:31:41.828 "supported_io_types": { 00:31:41.828 "read": true, 00:31:41.828 "write": true, 00:31:41.828 "unmap": false, 00:31:41.828 "flush": false, 00:31:41.828 "reset": true, 00:31:41.828 "nvme_admin": false, 
00:31:41.828 "nvme_io": false, 00:31:41.828 "nvme_io_md": false, 00:31:41.828 "write_zeroes": true, 00:31:41.828 "zcopy": false, 00:31:41.828 "get_zone_info": false, 00:31:41.828 "zone_management": false, 00:31:41.828 "zone_append": false, 00:31:41.828 "compare": false, 00:31:41.828 "compare_and_write": false, 00:31:41.828 "abort": false, 00:31:41.828 "seek_hole": false, 00:31:41.828 "seek_data": false, 00:31:41.828 "copy": false, 00:31:41.828 "nvme_iov_md": false 00:31:41.828 }, 00:31:41.828 "memory_domains": [ 00:31:41.828 { 00:31:41.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:41.828 "dma_device_type": 2 00:31:41.828 } 00:31:41.828 ], 00:31:41.828 "driver_specific": { 00:31:41.828 "crypto": { 00:31:41.828 "base_bdev_name": "EE_base0", 00:31:41.828 "name": "crypt0", 00:31:41.828 "key_name": "test_dek_sw" 00:31:41.828 } 00:31:41.828 } 00:31:41.828 } 00:31:41.828 ] 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@907 -- # return 0 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # rpcpid=2513799 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@656 -- # sleep 1 00:31:41.828 10:46:45 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:41.828 Running I/O for 5 seconds... 
00:31:42.761 10:46:46 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:31:42.761 10:46:46 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.761 10:46:46 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:42.761 10:46:46 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.761 10:46:46 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@659 -- # wait 2513799 00:31:46.941 00:31:46.941 Latency(us) 00:31:46.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.941 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:31:46.941 crypt0 : 5.00 38733.38 151.30 0.00 0.00 822.76 323.13 1243.97 00:31:46.941 =================================================================================================================== 00:31:46.941 Total : 38733.38 151.30 0.00 0.00 822.76 323.13 1243.97 00:31:46.941 0 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@661 -- # rpc_cmd bdev_crypto_delete crypt0 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@663 -- # killprocess 2513663 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@950 -- # '[' -z 2513663 ']' 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # kill -0 2513663 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # uname 00:31:46.942 10:46:50 
blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2513663 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2513663' 00:31:46.942 killing process with pid 2513663 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@969 -- # kill 2513663 00:31:46.942 Received shutdown signal, test time was about 5.000000 seconds 00:31:46.942 00:31:46.942 Latency(us) 00:31:46.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.942 =================================================================================================================== 00:31:46.942 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:46.942 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@974 -- # wait 2513663 00:31:47.200 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # trap - SIGINT SIGTERM EXIT 00:31:47.200 00:31:47.200 real 0m6.509s 00:31:47.200 user 0m6.788s 00:31:47.200 sys 0m0.348s 00:31:47.200 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:47.200 10:46:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:31:47.200 ************************************ 00:31:47.200 END TEST bdev_crypto_enomem 00:31:47.200 ************************************ 00:31:47.200 10:46:50 blockdev_crypto_sw -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:31:47.200 10:46:50 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # 
cleanup 00:31:47.200 10:46:50 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:31:47.200 10:46:50 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:47.200 10:46:50 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:31:47.200 10:46:50 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:31:47.200 10:46:50 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:31:47.200 10:46:50 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:31:47.200 00:31:47.200 real 0m53.938s 00:31:47.200 user 1m28.787s 00:31:47.200 sys 0m5.437s 00:31:47.200 10:46:50 blockdev_crypto_sw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:47.200 10:46:50 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:47.200 ************************************ 00:31:47.200 END TEST blockdev_crypto_sw 00:31:47.200 ************************************ 00:31:47.200 10:46:50 -- spdk/autotest.sh@363 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:31:47.200 10:46:50 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:47.200 10:46:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:47.200 10:46:50 -- common/autotest_common.sh@10 -- # set +x 00:31:47.200 ************************************ 00:31:47.200 START TEST blockdev_crypto_qat 00:31:47.200 ************************************ 00:31:47.200 10:46:50 blockdev_crypto_qat -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:31:47.200 * Looking for test storage... 
00:31:47.200 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # uname -s 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@681 -- # test_type=crypto_qat 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # crypto_device= 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # dek= 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # 
env_ctx= 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == bdev ]] 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == crypto_* ]] 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2514518 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:47.200 10:46:50 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 2514518 00:31:47.200 10:46:50 blockdev_crypto_qat -- common/autotest_common.sh@831 -- # '[' -z 2514518 ']' 00:31:47.200 10:46:50 blockdev_crypto_qat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.200 10:46:50 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:47.200 10:46:50 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.200 10:46:50 blockdev_crypto_qat -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:47.200 10:46:50 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:47.200 [2024-07-25 10:46:50.904905] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:31:47.200 [2024-07-25 10:46:50.904980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2514518 ] 00:31:47.458 [2024-07-25 10:46:50.982139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.458 [2024-07-25 10:46:51.090311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.391 10:46:51 blockdev_crypto_qat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:48.391 10:46:51 blockdev_crypto_qat -- common/autotest_common.sh@864 -- # return 0 00:31:48.391 10:46:51 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:31:48.391 10:46:51 blockdev_crypto_qat -- bdev/blockdev.sh@707 -- # setup_crypto_qat_conf 00:31:48.391 10:46:51 blockdev_crypto_qat -- bdev/blockdev.sh@169 -- # rpc_cmd 00:31:48.391 10:46:51 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.391 10:46:51 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:48.391 [2024-07-25 10:46:51.880839] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:31:48.391 [2024-07-25 10:46:51.888858] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:48.391 [2024-07-25 10:46:51.896877] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:48.391 [2024-07-25 10:46:51.969169] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:31:50.917 true 00:31:50.917 true 00:31:50.917 true 00:31:50.917 true 00:31:50.917 Malloc0 00:31:50.917 Malloc1 00:31:50.917 Malloc2 00:31:50.917 Malloc3 00:31:50.917 [2024-07-25 10:46:54.419184] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 
00:31:50.917 crypto_ram 00:31:50.917 [2024-07-25 10:46:54.427192] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:31:50.917 crypto_ram1 00:31:50.917 [2024-07-25 10:46:54.435213] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:31:50.917 crypto_ram2 00:31:50.917 [2024-07-25 10:46:54.443234] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:31:50.917 crypto_ram3 00:31:50.917 [ 00:31:50.917 { 00:31:50.917 "name": "Malloc1", 00:31:50.917 "aliases": [ 00:31:50.917 "3567c59a-9101-4422-83dc-b673606b7d89" 00:31:50.917 ], 00:31:50.917 "product_name": "Malloc disk", 00:31:50.917 "block_size": 512, 00:31:50.917 "num_blocks": 65536, 00:31:50.917 "uuid": "3567c59a-9101-4422-83dc-b673606b7d89", 00:31:50.917 "assigned_rate_limits": { 00:31:50.917 "rw_ios_per_sec": 0, 00:31:50.917 "rw_mbytes_per_sec": 0, 00:31:50.917 "r_mbytes_per_sec": 0, 00:31:50.917 "w_mbytes_per_sec": 0 00:31:50.917 }, 00:31:50.917 "claimed": true, 00:31:50.917 "claim_type": "exclusive_write", 00:31:50.917 "zoned": false, 00:31:50.917 "supported_io_types": { 00:31:50.917 "read": true, 00:31:50.917 "write": true, 00:31:50.917 "unmap": true, 00:31:50.917 "flush": true, 00:31:50.917 "reset": true, 00:31:50.917 "nvme_admin": false, 00:31:50.917 "nvme_io": false, 00:31:50.917 "nvme_io_md": false, 00:31:50.917 "write_zeroes": true, 00:31:50.917 "zcopy": true, 00:31:50.917 "get_zone_info": false, 00:31:50.918 "zone_management": false, 00:31:50.918 "zone_append": false, 00:31:50.918 "compare": false, 00:31:50.918 "compare_and_write": false, 00:31:50.918 "abort": true, 00:31:50.918 "seek_hole": false, 00:31:50.918 "seek_data": false, 00:31:50.918 "copy": true, 00:31:50.918 "nvme_iov_md": false 00:31:50.918 }, 00:31:50.918 "memory_domains": [ 00:31:50.918 { 00:31:50.918 "dma_device_id": "system", 00:31:50.918 "dma_device_type": 1 00:31:50.918 }, 00:31:50.918 { 00:31:50.918 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:31:50.918 "dma_device_type": 2 00:31:50.918 } 00:31:50.918 ], 00:31:50.918 "driver_specific": {} 00:31:50.918 } 00:31:50.918 ] 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # cat 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:31:50.918 10:46:54 blockdev_crypto_qat -- 
bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r .name 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "a953661c-df72-58fc-974d-1a1223873b23"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a953661c-df72-58fc-974d-1a1223873b23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "843b01dd-8d9f-5090-8713-1fcc81e1aef0"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": 
"843b01dd-8d9f-5090-8713-1fcc81e1aef0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "a688f97a-f674-5fca-af94-329872e023e6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "a688f97a-f674-5fca-af94-329872e023e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' 
"name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "893b8e1d-cf44-5629-8d51-5e07e61a899c"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "893b8e1d-cf44-5629-8d51-5e07e61a899c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:31:50.918 10:46:54 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # killprocess 2514518 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@950 -- # '[' -z 2514518 ']' 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # kill -0 2514518 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # uname 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:50.918 10:46:54 
blockdev_crypto_qat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2514518 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2514518' 00:31:50.918 killing process with pid 2514518 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@969 -- # kill 2514518 00:31:50.918 10:46:54 blockdev_crypto_qat -- common/autotest_common.sh@974 -- # wait 2514518 00:31:51.850 10:46:55 blockdev_crypto_qat -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:51.850 10:46:55 blockdev_crypto_qat -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:31:51.850 10:46:55 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:31:51.850 10:46:55 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:51.850 10:46:55 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:51.850 ************************************ 00:31:51.850 START TEST bdev_hello_world 00:31:51.850 ************************************ 00:31:51.850 10:46:55 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:31:51.850 [2024-07-25 10:46:55.367471] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:31:51.850 [2024-07-25 10:46:55.367546] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2515058 ] 00:31:51.850 [2024-07-25 10:46:55.457816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.108 [2024-07-25 10:46:55.580833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.108 [2024-07-25 10:46:55.602146] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:31:52.108 [2024-07-25 10:46:55.610170] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:52.108 [2024-07-25 10:46:55.618188] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:52.108 [2024-07-25 10:46:55.734241] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:31:54.635 [2024-07-25 10:46:57.991417] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:31:54.635 [2024-07-25 10:46:57.991510] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:54.635 [2024-07-25 10:46:57.991530] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:54.635 [2024-07-25 10:46:57.999431] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:31:54.635 [2024-07-25 10:46:57.999460] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:54.635 [2024-07-25 10:46:57.999474] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:54.635 [2024-07-25 10:46:58.007451] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key 
"test_dek_qat_cbc2" 00:31:54.635 [2024-07-25 10:46:58.007478] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:54.635 [2024-07-25 10:46:58.007493] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:54.635 [2024-07-25 10:46:58.015472] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:31:54.635 [2024-07-25 10:46:58.015499] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:54.635 [2024-07-25 10:46:58.015513] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:54.635 [2024-07-25 10:46:58.101481] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:54.635 [2024-07-25 10:46:58.101534] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:31:54.635 [2024-07-25 10:46:58.101555] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:54.635 [2024-07-25 10:46:58.102840] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:54.636 [2024-07-25 10:46:58.102920] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:54.636 [2024-07-25 10:46:58.102943] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:54.636 [2024-07-25 10:46:58.103011] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:31:54.636 00:31:54.636 [2024-07-25 10:46:58.103038] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:54.893 00:31:54.893 real 0m3.249s 00:31:54.893 user 0m2.798s 00:31:54.893 sys 0m0.418s 00:31:54.893 10:46:58 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:54.893 10:46:58 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:31:54.893 ************************************ 00:31:54.893 END TEST bdev_hello_world 00:31:54.893 ************************************ 00:31:54.893 10:46:58 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:31:54.893 10:46:58 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:54.893 10:46:58 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:54.893 10:46:58 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:55.151 ************************************ 00:31:55.151 START TEST bdev_bounds 00:31:55.151 ************************************ 00:31:55.151 10:46:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:31:55.151 10:46:58 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=2515470 00:31:55.151 10:46:58 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:31:55.151 10:46:58 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:55.151 10:46:58 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 2515470' 00:31:55.151 Process bdevio pid: 2515470 00:31:55.151 10:46:58 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 2515470 00:31:55.151 10:46:58 blockdev_crypto_qat.bdev_bounds 
-- common/autotest_common.sh@831 -- # '[' -z 2515470 ']' 00:31:55.151 10:46:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.152 10:46:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:55.152 10:46:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.152 10:46:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:55.152 10:46:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:55.152 [2024-07-25 10:46:58.657593] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:31:55.152 [2024-07-25 10:46:58.657674] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2515470 ] 00:31:55.152 [2024-07-25 10:46:58.734630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:55.152 [2024-07-25 10:46:58.843993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.152 [2024-07-25 10:46:58.844057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.152 [2024-07-25 10:46:58.844060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.410 [2024-07-25 10:46:58.865366] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:31:55.410 [2024-07-25 10:46:58.873409] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:55.410 [2024-07-25 10:46:58.881423] accel_rpc.c: 167:rpc_accel_assign_opc: 
*NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:55.410 [2024-07-25 10:46:58.988563] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:31:57.935 [2024-07-25 10:47:01.225183] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:31:57.935 [2024-07-25 10:47:01.225269] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:57.935 [2024-07-25 10:47:01.225286] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:57.935 [2024-07-25 10:47:01.233201] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:31:57.935 [2024-07-25 10:47:01.233226] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:57.935 [2024-07-25 10:47:01.233239] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:57.935 [2024-07-25 10:47:01.241223] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:31:57.935 [2024-07-25 10:47:01.241246] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:57.935 [2024-07-25 10:47:01.241266] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:57.935 [2024-07-25 10:47:01.249246] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:31:57.935 [2024-07-25 10:47:01.249269] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:57.935 [2024-07-25 10:47:01.249281] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:57.935 10:47:01 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:57.935 10:47:01 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@864 -- # 
return 0 00:31:57.935 10:47:01 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:57.935 I/O targets: 00:31:57.935 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:31:57.935 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:31:57.935 crypto_ram2: 8192 blocks of 4096 bytes (32 MiB) 00:31:57.935 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:31:57.935 00:31:57.935 00:31:57.935 CUnit - A unit testing framework for C - Version 2.1-3 00:31:57.935 http://cunit.sourceforge.net/ 00:31:57.935 00:31:57.935 00:31:57.935 Suite: bdevio tests on: crypto_ram3 00:31:57.935 Test: blockdev write read block ...passed 00:31:57.935 Test: blockdev write zeroes read block ...passed 00:31:57.935 Test: blockdev write zeroes read no split ...passed 00:31:57.935 Test: blockdev write zeroes read split ...passed 00:31:57.935 Test: blockdev write zeroes read split partial ...passed 00:31:57.935 Test: blockdev reset ...passed 00:31:57.935 Test: blockdev write read 8 blocks ...passed 00:31:57.935 Test: blockdev write read size > 128k ...passed 00:31:57.935 Test: blockdev write read invalid size ...passed 00:31:57.935 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:57.935 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:57.935 Test: blockdev write read max offset ...passed 00:31:57.935 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:57.935 Test: blockdev writev readv 8 blocks ...passed 00:31:57.935 Test: blockdev writev readv 30 x 1block ...passed 00:31:57.935 Test: blockdev writev readv block ...passed 00:31:57.935 Test: blockdev writev readv size > 128k ...passed 00:31:57.935 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:57.935 Test: blockdev comparev and writev ...passed 00:31:57.935 Test: blockdev nvme passthru rw ...passed 00:31:57.935 Test: blockdev nvme passthru vendor 
specific ...passed 00:31:57.935 Test: blockdev nvme admin passthru ...passed 00:31:57.935 Test: blockdev copy ...passed 00:31:57.935 Suite: bdevio tests on: crypto_ram2 00:31:57.935 Test: blockdev write read block ...passed 00:31:57.935 Test: blockdev write zeroes read block ...passed 00:31:57.935 Test: blockdev write zeroes read no split ...passed 00:31:57.935 Test: blockdev write zeroes read split ...passed 00:31:57.935 Test: blockdev write zeroes read split partial ...passed 00:31:57.935 Test: blockdev reset ...passed 00:31:57.935 Test: blockdev write read 8 blocks ...passed 00:31:57.935 Test: blockdev write read size > 128k ...passed 00:31:57.935 Test: blockdev write read invalid size ...passed 00:31:57.935 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:57.935 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:57.935 Test: blockdev write read max offset ...passed 00:31:57.935 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:57.935 Test: blockdev writev readv 8 blocks ...passed 00:31:57.935 Test: blockdev writev readv 30 x 1block ...passed 00:31:57.935 Test: blockdev writev readv block ...passed 00:31:57.935 Test: blockdev writev readv size > 128k ...passed 00:31:57.935 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:57.935 Test: blockdev comparev and writev ...passed 00:31:57.935 Test: blockdev nvme passthru rw ...passed 00:31:57.935 Test: blockdev nvme passthru vendor specific ...passed 00:31:57.935 Test: blockdev nvme admin passthru ...passed 00:31:57.936 Test: blockdev copy ...passed 00:31:57.936 Suite: bdevio tests on: crypto_ram1 00:31:57.936 Test: blockdev write read block ...passed 00:31:57.936 Test: blockdev write zeroes read block ...passed 00:31:57.936 Test: blockdev write zeroes read no split ...passed 00:31:57.936 Test: blockdev write zeroes read split ...passed 00:31:57.936 Test: blockdev write zeroes read split partial ...passed 00:31:57.936 
Test: blockdev reset ...passed 00:31:57.936 Test: blockdev write read 8 blocks ...passed 00:31:57.936 Test: blockdev write read size > 128k ...passed 00:31:57.936 Test: blockdev write read invalid size ...passed 00:31:57.936 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:57.936 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:57.936 Test: blockdev write read max offset ...passed 00:31:57.936 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:57.936 Test: blockdev writev readv 8 blocks ...passed 00:31:57.936 Test: blockdev writev readv 30 x 1block ...passed 00:31:57.936 Test: blockdev writev readv block ...passed 00:31:57.936 Test: blockdev writev readv size > 128k ...passed 00:31:57.936 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:57.936 Test: blockdev comparev and writev ...passed 00:31:57.936 Test: blockdev nvme passthru rw ...passed 00:31:57.936 Test: blockdev nvme passthru vendor specific ...passed 00:31:57.936 Test: blockdev nvme admin passthru ...passed 00:31:57.936 Test: blockdev copy ...passed 00:31:57.936 Suite: bdevio tests on: crypto_ram 00:31:57.936 Test: blockdev write read block ...passed 00:31:57.936 Test: blockdev write zeroes read block ...passed 00:31:58.193 Test: blockdev write zeroes read no split ...passed 00:31:58.193 Test: blockdev write zeroes read split ...passed 00:31:58.193 Test: blockdev write zeroes read split partial ...passed 00:31:58.193 Test: blockdev reset ...passed 00:31:58.193 Test: blockdev write read 8 blocks ...passed 00:31:58.193 Test: blockdev write read size > 128k ...passed 00:31:58.193 Test: blockdev write read invalid size ...passed 00:31:58.193 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:58.193 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:58.193 Test: blockdev write read max offset ...passed 00:31:58.193 Test: blockdev write read 2 blocks on 
overlapped address offset ...passed 00:31:58.193 Test: blockdev writev readv 8 blocks ...passed 00:31:58.193 Test: blockdev writev readv 30 x 1block ...passed 00:31:58.193 Test: blockdev writev readv block ...passed 00:31:58.193 Test: blockdev writev readv size > 128k ...passed 00:31:58.193 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:58.193 Test: blockdev comparev and writev ...passed 00:31:58.193 Test: blockdev nvme passthru rw ...passed 00:31:58.193 Test: blockdev nvme passthru vendor specific ...passed 00:31:58.193 Test: blockdev nvme admin passthru ...passed 00:31:58.193 Test: blockdev copy ...passed 00:31:58.193 00:31:58.193 Run Summary: Type Total Ran Passed Failed Inactive 00:31:58.193 suites 4 4 n/a 0 0 00:31:58.193 tests 92 92 92 0 0 00:31:58.193 asserts 520 520 520 0 n/a 00:31:58.193 00:31:58.193 Elapsed time = 0.637 seconds 00:31:58.193 0 00:31:58.193 10:47:01 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 2515470 00:31:58.193 10:47:01 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 2515470 ']' 00:31:58.193 10:47:01 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 2515470 00:31:58.193 10:47:01 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:31:58.194 10:47:01 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:58.194 10:47:01 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2515470 00:31:58.194 10:47:01 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:58.194 10:47:01 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:58.194 10:47:01 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2515470' 00:31:58.194 killing process with pid 2515470 00:31:58.194 10:47:01 
blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@969 -- # kill 2515470 00:31:58.194 10:47:01 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@974 -- # wait 2515470 00:31:58.758 10:47:02 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:31:58.758 00:31:58.758 real 0m3.695s 00:31:58.758 user 0m10.319s 00:31:58.758 sys 0m0.533s 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:58.759 ************************************ 00:31:58.759 END TEST bdev_bounds 00:31:58.759 ************************************ 00:31:58.759 10:47:02 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:31:58.759 10:47:02 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:58.759 10:47:02 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:58.759 10:47:02 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:58.759 ************************************ 00:31:58.759 START TEST bdev_nbd 00:31:58.759 ************************************ 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local 
conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=4 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=2515901 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:58.759 10:47:02 
blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 2515901 /var/tmp/spdk-nbd.sock 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 2515901 ']' 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:58.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:58.759 10:47:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:31:58.759 [2024-07-25 10:47:02.407544] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
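The `waitforlisten` helper invoked above polls until the freshly started `bdev_svc` process is alive and its RPC socket (`/var/tmp/spdk-nbd.sock`) is available. A minimal standalone sketch of that retry pattern (the function body here is an illustration, not SPDK's actual `autotest_common.sh` implementation; the real helper also probes the server over RPC before returning):

```shell
#!/usr/bin/env bash
# Illustrative retry loop in the spirit of waitforlisten:
# succeed once the target pid is alive AND its UNIX-domain socket exists.
waitforlisten_sketch() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=${3:-100}              # hypothetical knob for this sketch
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -S "$rpc_addr" ] && return 0           # socket is up, stop waiting
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Checking only that the socket file exists is the simplified part of this sketch; the production helper additionally confirms the server answers an RPC (and bounds the wait with `max_retries=100`, as seen in the trace).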
00:31:58.759 [2024-07-25 10:47:02.407629] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.018 [2024-07-25 10:47:02.483795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.018 [2024-07-25 10:47:02.590484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.018 [2024-07-25 10:47:02.611625] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:31:59.018 [2024-07-25 10:47:02.619646] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:59.018 [2024-07-25 10:47:02.627662] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:59.277 [2024-07-25 10:47:02.752443] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:32:01.833 [2024-07-25 10:47:05.012819] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:32:01.833 [2024-07-25 10:47:05.012902] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:01.833 [2024-07-25 10:47:05.012919] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:01.833 [2024-07-25 10:47:05.020834] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:32:01.833 [2024-07-25 10:47:05.020874] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:01.833 [2024-07-25 10:47:05.020888] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:01.833 [2024-07-25 10:47:05.028854] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 
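The four `Found key` / `creation deferred` notice groups above are emitted while `bdev_svc` replays the `bdev.json` file passed via `--json`: each entry is a `bdev_crypto_create` method call, and because the `Malloc*` base bdevs do not exist yet, each crypto vbdev is deferred until its base arrives. A hypothetical fragment of such a config (bdev and key names taken from the log; the exact parameter names are assumptions based on SPDK's JSON-RPC conventions and may differ between releases):

```json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_crypto_create",
          "params": {
            "base_bdev_name": "Malloc0",
            "name": "crypto_ram",
            "key_name": "test_dek_qat_cbc"
          }
        },
        {
          "method": "bdev_crypto_create",
          "params": {
            "base_bdev_name": "Malloc1",
            "name": "crypto_ram1",
            "key_name": "test_dek_qat_xts"
          }
        }
      ]
    }
  ]
}
```

The `crypto_ram2`/`crypto_ram3` entries follow the same shape with `Malloc2`/`test_dek_qat_cbc2` and `Malloc3`/`test_dek_qat_xts2`.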
00:32:01.833 [2024-07-25 10:47:05.028877] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:01.833 [2024-07-25 10:47:05.028904] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:01.833 [2024-07-25 10:47:05.036875] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:32:01.833 [2024-07-25 10:47:05.036897] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:01.833 [2024-07-25 10:47:05.036924] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:32:01.833 10:47:05 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:01.833 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:01.834 1+0 records in 00:32:01.834 1+0 records out 00:32:01.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290078 
s, 14.1 MB/s 00:32:01.834 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:01.834 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:01.834 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:01.834 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:01.834 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:01.834 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:01.834 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:32:01.834 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:02.091 10:47:05 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:02.091 1+0 records in 00:32:02.091 1+0 records out 00:32:02.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271398 s, 15.1 MB/s 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:32:02.091 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:32:02.348 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@869 -- # local i 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:02.349 1+0 records in 00:32:02.349 1+0 records out 00:32:02.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266949 s, 15.3 MB/s 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:32:02.349 10:47:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk crypto_ram3 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:02.606 1+0 records in 00:32:02.606 1+0 records out 00:32:02.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241353 s, 17.0 MB/s 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:32:02.606 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:02.864 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:32:02.864 { 00:32:02.864 "nbd_device": "/dev/nbd0", 00:32:02.864 "bdev_name": "crypto_ram" 00:32:02.864 }, 00:32:02.864 { 00:32:02.864 "nbd_device": "/dev/nbd1", 00:32:02.864 "bdev_name": "crypto_ram1" 00:32:02.864 }, 00:32:02.864 { 00:32:02.864 "nbd_device": "/dev/nbd2", 00:32:02.864 "bdev_name": "crypto_ram2" 00:32:02.864 }, 00:32:02.864 { 00:32:02.864 "nbd_device": "/dev/nbd3", 00:32:02.864 "bdev_name": "crypto_ram3" 00:32:02.864 } 00:32:02.864 ]' 00:32:02.864 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:32:02.864 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:32:02.864 { 00:32:02.864 "nbd_device": "/dev/nbd0", 00:32:02.864 "bdev_name": "crypto_ram" 00:32:02.864 }, 00:32:02.864 { 00:32:02.864 "nbd_device": "/dev/nbd1", 00:32:02.864 "bdev_name": "crypto_ram1" 00:32:02.864 }, 00:32:02.864 { 00:32:02.864 "nbd_device": "/dev/nbd2", 00:32:02.864 "bdev_name": "crypto_ram2" 00:32:02.864 }, 00:32:02.864 { 00:32:02.864 "nbd_device": "/dev/nbd3", 00:32:02.864 "bdev_name": "crypto_ram3" 00:32:02.864 } 00:32:02.864 ]' 00:32:02.864 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:03.122 10:47:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:03.380 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:32:03.380 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:03.380 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:03.380 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:03.380 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:03.380 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:03.380 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:03.380 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:03.380 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:03.380 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:32:03.638 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:32:03.638 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:32:03.638 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:32:03.638 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:03.638 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:03.638 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:32:03.638 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:03.638 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:03.638 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:03.638 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:32:03.897 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:32:03.897 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:32:03.897 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:32:03.897 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:03.897 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:03.897 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:32:03.897 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:03.897 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:03.897 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:03.897 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:03.897 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:04.155 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:04.155 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:04.156 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 
00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 
00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:04.414 10:47:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:32:04.414 /dev/nbd0 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:04.673 1+0 records in 00:32:04.673 1+0 records out 00:32:04.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269872 s, 15.2 MB/s 00:32:04.673 10:47:08 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:04.673 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:32:04.931 /dev/nbd1 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:04.931 10:47:08 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:04.931 1+0 records in 00:32:04.931 1+0 records out 00:32:04.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210426 s, 19.5 MB/s 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:04.931 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:32:05.189 /dev/nbd10 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:05.189 10:47:08 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:05.189 1+0 records in 00:32:05.189 1+0 records out 00:32:05.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258999 s, 15.8 MB/s 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:05.189 10:47:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:32:05.447 /dev/nbd11 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 
00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:05.447 1+0 records in 00:32:05.447 1+0 records out 00:32:05.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267668 s, 15.3 MB/s 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:05.447 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:05.704 { 00:32:05.704 "nbd_device": "/dev/nbd0", 00:32:05.704 "bdev_name": "crypto_ram" 00:32:05.704 }, 00:32:05.704 { 00:32:05.704 "nbd_device": "/dev/nbd1", 00:32:05.704 "bdev_name": "crypto_ram1" 00:32:05.704 }, 00:32:05.704 { 00:32:05.704 "nbd_device": "/dev/nbd10", 00:32:05.704 "bdev_name": "crypto_ram2" 00:32:05.704 }, 00:32:05.704 { 00:32:05.704 "nbd_device": "/dev/nbd11", 00:32:05.704 "bdev_name": "crypto_ram3" 00:32:05.704 } 00:32:05.704 ]' 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:05.704 { 00:32:05.704 "nbd_device": "/dev/nbd0", 00:32:05.704 "bdev_name": "crypto_ram" 00:32:05.704 }, 00:32:05.704 { 00:32:05.704 "nbd_device": "/dev/nbd1", 00:32:05.704 "bdev_name": "crypto_ram1" 00:32:05.704 }, 00:32:05.704 { 00:32:05.704 "nbd_device": "/dev/nbd10", 00:32:05.704 "bdev_name": "crypto_ram2" 00:32:05.704 }, 00:32:05.704 { 00:32:05.704 "nbd_device": "/dev/nbd11", 00:32:05.704 "bdev_name": "crypto_ram3" 00:32:05.704 } 00:32:05.704 ]' 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:32:05.704 /dev/nbd1 00:32:05.704 /dev/nbd10 00:32:05.704 /dev/nbd11' 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:32:05.704 /dev/nbd1 00:32:05.704 /dev/nbd10 00:32:05.704 /dev/nbd11' 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:32:05.704 256+0 records in 00:32:05.704 256+0 records out 00:32:05.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490767 s, 214 MB/s 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:05.704 256+0 records in 00:32:05.704 256+0 records 
out 00:32:05.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0575698 s, 18.2 MB/s 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:05.704 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:32:05.962 256+0 records in 00:32:05.962 256+0 records out 00:32:05.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0489333 s, 21.4 MB/s 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:32:05.962 256+0 records in 00:32:05.962 256+0 records out 00:32:05.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0432484 s, 24.2 MB/s 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:32:05.962 256+0 records in 00:32:05.962 256+0 records out 00:32:05.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0411645 s, 25.5 MB/s 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 
00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:05.962 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:05.963 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:06.220 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:06.220 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:06.220 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:06.220 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:06.220 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:06.220 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:06.220 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:06.220 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:06.220 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:06.220 10:47:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:06.478 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:06.478 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:06.478 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:06.478 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:06.478 10:47:10 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:06.478 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:06.478 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:06.478 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:06.478 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:06.478 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:32:06.735 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:32:06.735 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:32:06.735 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:32:06.735 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:06.735 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:06.735 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:32:06.735 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:06.735 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:06.735 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:06.735 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:32:06.993 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:32:06.993 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:32:06.993 10:47:10 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:32:06.993 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:06.993 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:06.993 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:32:06.993 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:06.993 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:06.993 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:06.993 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:06.993 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- 
# '[' 0 -ne 0 ']' 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:32:07.251 10:47:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:32:07.507 malloc_lvol_verify 00:32:07.507 10:47:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:32:07.764 859e534e-0a0a-4e35-b2d0-6146935ba2a6 00:32:07.764 10:47:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:32:08.022 4144b39e-e57e-4677-9ac8-d4804fde1e38 00:32:08.022 10:47:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:32:08.281 /dev/nbd0 00:32:08.281 10:47:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:32:08.281 mke2fs 1.46.5 (30-Dec-2021) 00:32:08.281 Discarding device blocks: 0/4096 done 00:32:08.281 Creating filesystem with 4096 1k blocks and 1024 inodes 
00:32:08.281 00:32:08.281 Allocating group tables: 0/1 done 00:32:08.281 Writing inode tables: 0/1 done 00:32:08.281 Creating journal (1024 blocks): done 00:32:08.281 Writing superblocks and filesystem accounting information: 0/1 done 00:32:08.281 00:32:08.281 10:47:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:32:08.281 10:47:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:08.281 10:47:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:08.281 10:47:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:08.281 10:47:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:08.281 10:47:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:08.281 10:47:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:08.281 10:47:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:08.538 10:47:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:08.538 10:47:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:08.538 10:47:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:08.539 10:47:12 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 2515901 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 2515901 ']' 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 2515901 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2515901 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2515901' 00:32:08.539 killing process with pid 2515901 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@969 -- # kill 2515901 00:32:08.539 10:47:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@974 -- # wait 2515901 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:32:09.103 00:32:09.103 real 0m10.356s 00:32:09.103 user 0m13.884s 00:32:09.103 sys 0m3.705s 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:09.103 ************************************ 00:32:09.103 END TEST bdev_nbd 00:32:09.103 ************************************ 00:32:09.103 10:47:12 blockdev_crypto_qat -- 
bdev/blockdev.sh@762 -- # [[ y == y ]] 00:32:09.103 10:47:12 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = nvme ']' 00:32:09.103 10:47:12 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = gpt ']' 00:32:09.103 10:47:12 blockdev_crypto_qat -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:32:09.103 10:47:12 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:09.103 10:47:12 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:09.103 10:47:12 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:09.103 ************************************ 00:32:09.103 START TEST bdev_fio 00:32:09.103 ************************************ 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:32:09.103 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:32:09.103 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:32:09.104 10:47:12 
blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram1]' 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram1 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:32:09.104 10:47:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:09.368 ************************************ 00:32:09.368 START TEST bdev_fio_rw_verify 00:32:09.368 ************************************ 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:09.368 10:47:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:09.626 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:09.626 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:09.626 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:09.626 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:09.626 fio-3.35 00:32:09.626 Starting 4 threads 00:32:24.492 00:32:24.492 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2517748: Thu Jul 25 10:47:25 2024 00:32:24.492 read: IOPS=25.1k, BW=98.2MiB/s (103MB/s)(982MiB/10001msec) 00:32:24.492 slat (usec): min=15, max=480, avg=53.82, stdev=26.71 00:32:24.492 clat (usec): min=22, max=1334, avg=302.29, stdev=173.77 00:32:24.492 lat (usec): min=43, max=1451, avg=356.10, stdev=185.46 00:32:24.492 clat percentiles (usec): 00:32:24.492 | 50.000th=[ 258], 99.000th=[ 791], 99.900th=[ 955], 99.990th=[ 1090], 00:32:24.492 | 99.999th=[ 1221] 00:32:24.492 write: IOPS=27.8k, BW=108MiB/s (114MB/s)(1056MiB/9739msec); 0 zone resets 00:32:24.492 slat (usec): min=25, max=1183, avg=64.34, stdev=26.99 00:32:24.492 clat (usec): min=22, max=1691, avg=338.64, stdev=185.51 00:32:24.492 lat (usec): min=71, max=1718, avg=402.97, stdev=196.76 00:32:24.492 clat percentiles (usec): 00:32:24.492 | 50.000th=[ 302], 99.000th=[ 865], 99.900th=[ 1004], 99.990th=[ 1123], 00:32:24.492 | 99.999th=[ 1532] 00:32:24.492 bw ( KiB/s): min=94008, max=124936, per=97.37%, avg=108106.95, stdev=1976.20, 
samples=76 00:32:24.493 iops : min=23502, max=31234, avg=27026.74, stdev=494.05, samples=76 00:32:24.493 lat (usec) : 50=0.01%, 100=5.50%, 250=37.51%, 500=39.93%, 750=14.37% 00:32:24.493 lat (usec) : 1000=2.59% 00:32:24.493 lat (msec) : 2=0.08% 00:32:24.493 cpu : usr=99.45%, sys=0.01%, ctx=65, majf=0, minf=272 00:32:24.493 IO depths : 1=4.3%, 2=27.4%, 4=54.7%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.493 complete : 0=0.0%, 4=88.0%, 8=12.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.493 issued rwts: total=251462,270311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.493 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:24.493 00:32:24.493 Run status group 0 (all jobs): 00:32:24.493 READ: bw=98.2MiB/s (103MB/s), 98.2MiB/s-98.2MiB/s (103MB/s-103MB/s), io=982MiB (1030MB), run=10001-10001msec 00:32:24.493 WRITE: bw=108MiB/s (114MB/s), 108MiB/s-108MiB/s (114MB/s-114MB/s), io=1056MiB (1107MB), run=9739-9739msec 00:32:24.493 00:32:24.493 real 0m13.489s 00:32:24.493 user 0m43.039s 00:32:24.493 sys 0m0.482s 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:32:24.493 ************************************ 00:32:24.493 END TEST bdev_fio_rw_verify 00:32:24.493 ************************************ 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # 
local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "a953661c-df72-58fc-974d-1a1223873b23"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a953661c-df72-58fc-974d-1a1223873b23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' 
' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "843b01dd-8d9f-5090-8713-1fcc81e1aef0"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "843b01dd-8d9f-5090-8713-1fcc81e1aef0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": 
"crypto_ram2",' ' "aliases": [' ' "a688f97a-f674-5fca-af94-329872e023e6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "a688f97a-f674-5fca-af94-329872e023e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "893b8e1d-cf44-5629-8d51-5e07e61a899c"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "893b8e1d-cf44-5629-8d51-5e07e61a899c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:32:24.493 crypto_ram1 00:32:24.493 crypto_ram2 00:32:24.493 crypto_ram3 ]] 00:32:24.493 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "a953661c-df72-58fc-974d-1a1223873b23"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a953661c-df72-58fc-974d-1a1223873b23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "843b01dd-8d9f-5090-8713-1fcc81e1aef0"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": 
"843b01dd-8d9f-5090-8713-1fcc81e1aef0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "a688f97a-f674-5fca-af94-329872e023e6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "a688f97a-f674-5fca-af94-329872e023e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' 
"name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "893b8e1d-cf44-5629-8d51-5e07e61a899c"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "893b8e1d-cf44-5629-8d51-5e07e61a899c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram1]' 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram1 00:32:24.494 
10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:24.494 ************************************ 00:32:24.494 START TEST bdev_fio_trim 00:32:24.494 ************************************ 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 
--verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:24.494 10:47:26 
blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:24.494 10:47:26 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:24.494 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:24.494 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:24.494 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:24.494 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:24.494 fio-3.35 00:32:24.494 Starting 4 threads 00:32:36.691 00:32:36.691 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2519415: Thu Jul 25 10:47:39 2024 00:32:36.691 write: IOPS=43.4k, BW=170MiB/s (178MB/s)(1697MiB/10001msec); 0 zone resets 00:32:36.691 slat (usec): min=16, max=1586, avg=54.90, stdev=27.52 00:32:36.691 clat (usec): min=35, max=1820, avg=201.87, stdev=101.02 00:32:36.691 lat (usec): min=64, max=1846, avg=256.77, stdev=113.38 00:32:36.691 clat percentiles (usec): 00:32:36.691 | 50.000th=[ 188], 99.000th=[ 498], 99.900th=[ 603], 99.990th=[ 676], 00:32:36.691 | 99.999th=[ 816] 00:32:36.691 bw ( KiB/s): min=161888, max=199496, per=100.00%, avg=174160.84, stdev=2065.00, samples=76 00:32:36.691 iops : min=40472, max=49874, avg=43540.21, stdev=516.25, samples=76 00:32:36.691 trim: IOPS=43.4k, BW=170MiB/s (178MB/s)(1697MiB/10001msec); 0 zone resets 00:32:36.691 slat (usec): min=5, max=134, avg=14.49, stdev= 5.09 00:32:36.691 clat (usec): min=38, max=1802, avg=205.81, stdev=116.30 00:32:36.691 lat (usec): min=45, max=1820, avg=220.29, stdev=116.92 00:32:36.691 clat percentiles (usec): 00:32:36.691 | 50.000th=[ 186], 99.000th=[ 553], 99.900th=[ 676], 99.990th=[ 775], 00:32:36.691 | 99.999th=[ 1074] 00:32:36.692 bw ( KiB/s): min=161888, max=199512, per=100.00%, avg=174162.11, stdev=2066.01, samples=76 00:32:36.692 iops : min=40472, max=49878, avg=43540.53, stdev=516.50, samples=76 00:32:36.692 lat (usec) : 50=1.38%, 100=15.31%, 250=55.73%, 500=25.95%, 750=1.62% 00:32:36.692 lat (usec) : 1000=0.01% 00:32:36.692 lat (msec) : 2=0.01% 00:32:36.692 cpu : usr=99.46%, sys=0.01%, ctx=77, majf=0, minf=122 00:32:36.692 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:36.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:36.692 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.692 issued rwts: total=0,434463,434465,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.692 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:36.692 00:32:36.692 Run status group 0 (all jobs): 00:32:36.692 WRITE: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=1697MiB (1780MB), run=10001-10001msec 00:32:36.692 TRIM: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=1697MiB (1780MB), run=10001-10001msec 00:32:36.692 00:32:36.692 real 0m13.525s 00:32:36.692 user 0m43.095s 00:32:36.692 sys 0m0.509s 00:32:36.692 10:47:39 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:36.692 10:47:39 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:32:36.692 ************************************ 00:32:36.692 END TEST bdev_fio_trim 00:32:36.692 ************************************ 00:32:36.692 10:47:39 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:32:36.692 10:47:39 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:36.692 10:47:39 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:32:36.692 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:32:36.692 10:47:39 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:32:36.692 00:32:36.692 real 0m27.238s 00:32:36.692 user 1m26.263s 00:32:36.692 sys 0m1.096s 00:32:36.692 10:47:39 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:36.692 10:47:39 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:36.692 ************************************ 00:32:36.692 END TEST bdev_fio 00:32:36.692 ************************************ 00:32:36.692 10:47:40 blockdev_crypto_qat -- bdev/blockdev.sh@774 -- # trap cleanup 
SIGINT SIGTERM EXIT 00:32:36.692 10:47:40 blockdev_crypto_qat -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:36.692 10:47:40 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:32:36.692 10:47:40 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:36.692 10:47:40 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:36.692 ************************************ 00:32:36.692 START TEST bdev_verify 00:32:36.692 ************************************ 00:32:36.692 10:47:40 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:36.692 [2024-07-25 10:47:40.096847] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:32:36.692 [2024-07-25 10:47:40.096903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520627 ] 00:32:36.692 [2024-07-25 10:47:40.176820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:36.692 [2024-07-25 10:47:40.297638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.692 [2024-07-25 10:47:40.297644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.692 [2024-07-25 10:47:40.319083] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:32:36.692 [2024-07-25 10:47:40.327118] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:36.692 [2024-07-25 10:47:40.335136] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:36.992 [2024-07-25 10:47:40.454735] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:32:39.516 [2024-07-25 10:47:42.707833] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:32:39.516 [2024-07-25 10:47:42.707937] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:39.516 [2024-07-25 10:47:42.707958] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:39.516 [2024-07-25 10:47:42.715849] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:32:39.516 [2024-07-25 10:47:42.715879] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:39.516 [2024-07-25 10:47:42.715898] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:39.517 
[2024-07-25 10:47:42.723862] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:32:39.517 [2024-07-25 10:47:42.723885] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:39.517 [2024-07-25 10:47:42.723903] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:39.517 [2024-07-25 10:47:42.731882] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:32:39.517 [2024-07-25 10:47:42.731904] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:39.517 [2024-07-25 10:47:42.731922] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:39.517 Running I/O for 5 seconds... 00:32:44.777 00:32:44.777 Latency(us) 00:32:44.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.777 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:44.777 Verification LBA range: start 0x0 length 0x1000 00:32:44.777 crypto_ram : 5.06 581.45 2.27 0.00 0.00 219788.19 8446.86 136703.24 00:32:44.777 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:44.777 Verification LBA range: start 0x1000 length 0x1000 00:32:44.777 crypto_ram : 5.06 581.47 2.27 0.00 0.00 219760.43 8446.86 136703.24 00:32:44.777 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:44.777 Verification LBA range: start 0x0 length 0x1000 00:32:44.777 crypto_ram1 : 5.07 581.17 2.27 0.00 0.00 219286.02 9660.49 126605.84 00:32:44.777 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:44.777 Verification LBA range: start 0x1000 length 0x1000 00:32:44.777 crypto_ram1 : 5.07 581.20 2.27 0.00 0.00 219286.30 9611.95 126605.84 00:32:44.777 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:44.777 
Verification LBA range: start 0x0 length 0x1000 00:32:44.777 crypto_ram2 : 5.05 4551.73 17.78 0.00 0.00 27911.08 4053.52 20777.34 00:32:44.777 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:44.777 Verification LBA range: start 0x1000 length 0x1000 00:32:44.777 crypto_ram2 : 5.05 4551.85 17.78 0.00 0.00 27908.88 4150.61 20583.16 00:32:44.777 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:44.777 Verification LBA range: start 0x0 length 0x1000 00:32:44.777 crypto_ram3 : 5.06 4556.95 17.80 0.00 0.00 27850.03 3665.16 20194.80 00:32:44.777 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:44.777 Verification LBA range: start 0x1000 length 0x1000 00:32:44.777 crypto_ram3 : 5.06 4557.09 17.80 0.00 0.00 27850.78 3665.16 20486.07 00:32:44.777 =================================================================================================================== 00:32:44.777 Total : 20542.93 80.25 0.00 0.00 49620.97 3665.16 136703.24 00:32:44.777 00:32:44.777 real 0m8.367s 00:32:44.777 user 0m15.834s 00:32:44.777 sys 0m0.410s 00:32:44.777 10:47:48 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:44.777 10:47:48 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:32:44.777 ************************************ 00:32:44.777 END TEST bdev_verify 00:32:44.777 ************************************ 00:32:44.777 10:47:48 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:44.777 10:47:48 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:32:44.777 10:47:48 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:44.777 10:47:48 blockdev_crypto_qat 
-- common/autotest_common.sh@10 -- # set +x 00:32:44.777 ************************************ 00:32:44.777 START TEST bdev_verify_big_io 00:32:44.777 ************************************ 00:32:44.777 10:47:48 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:45.034 [2024-07-25 10:47:48.511573] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:32:45.034 [2024-07-25 10:47:48.511632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521689 ] 00:32:45.034 [2024-07-25 10:47:48.592670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:45.034 [2024-07-25 10:47:48.716765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.034 [2024-07-25 10:47:48.716770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.034 [2024-07-25 10:47:48.738183] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:32:45.291 [2024-07-25 10:47:48.746203] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:45.291 [2024-07-25 10:47:48.754215] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:45.291 [2024-07-25 10:47:48.863205] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:32:47.814 [2024-07-25 10:47:51.115001] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:32:47.814 [2024-07-25 10:47:51.115115] bdev.c:8190:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: Malloc0 00:32:47.814 [2024-07-25 10:47:51.115137] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:47.814 [2024-07-25 10:47:51.123015] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:32:47.814 [2024-07-25 10:47:51.123045] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:47.814 [2024-07-25 10:47:51.123064] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:47.814 [2024-07-25 10:47:51.131037] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:32:47.814 [2024-07-25 10:47:51.131065] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:47.814 [2024-07-25 10:47:51.131089] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:47.814 [2024-07-25 10:47:51.139058] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:32:47.814 [2024-07-25 10:47:51.139086] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:47.814 [2024-07-25 10:47:51.139116] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:47.814 Running I/O for 5 seconds... 00:32:48.381 [2024-07-25 10:47:52.034047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.381 [2024-07-25 10:47:52.034468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.381 [2024-07-25 10:47:52.034560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.381 [2024-07-25 10:47:52.034617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.381 [2024-07-25 10:47:52.034670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.381 [2024-07-25 10:47:52.034727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.381 [2024-07-25 10:47:52.035064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.381 [2024-07-25 10:47:52.035091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.381 [2024-07-25 10:47:52.038083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.381 [2024-07-25 10:47:52.038178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.381 [2024-07-25 10:47:52.038223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.382 [2024-07-25 10:47:52.038266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.382 [2024-07-25 10:47:52.038659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.382 [2024-07-25 10:47:52.038718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.382 [2024-07-25 10:47:52.038771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.382 [2024-07-25 10:47:52.038823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.383 [2024-07-25 10:47:52.084014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.383 [2024-07-25 10:47:52.084066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.383 [2024-07-25 10:47:52.084128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.383 [2024-07-25 10:47:52.084493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.383 [2024-07-25 10:47:52.084534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.383 [2024-07-25 10:47:52.087165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.383 [2024-07-25 10:47:52.087239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.383 [2024-07-25 10:47:52.087286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.383 [2024-07-25 10:47:52.087340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.383 [2024-07-25 10:47:52.087789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.383 [2024-07-25 10:47:52.087859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.383 [2024-07-25 10:47:52.087918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.383 [2024-07-25 10:47:52.087971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.383 [2024-07-25 10:47:52.088293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.383 [2024-07-25 10:47:52.088317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.091112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.091197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.091250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.091300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.091689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.091753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.091807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.091860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.092262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.092286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.094875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.644 [2024-07-25 10:47:52.094955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.095014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.095068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.095490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.095550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.095604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.095656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.096029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.096057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.098429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.098490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.098545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.098598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.644 [2024-07-25 10:47:52.099025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.099083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.099156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.099201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.099535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.099562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.101944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.102011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.102066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.102128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.102530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.102589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.102643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.644 [2024-07-25 10:47:52.102694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.103065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.103092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.105543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.105605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.105659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.105712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.106154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.106203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.106248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.106291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.106614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.106641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.644 [2024-07-25 10:47:52.109013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.109072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.109149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.109196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.109594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.109658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.109719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.109780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.110155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.110178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.112581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.112646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.112712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.644 [2024-07-25 10:47:52.112775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.113199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.113253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.113302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.113345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.113690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.113717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.115994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.116054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.644 [2024-07-25 10:47:52.116116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.116186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.116655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.116712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.645 [2024-07-25 10:47:52.116764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.116816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.117189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.117212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.119572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.119641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.119696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.119750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.120195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.120244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.120288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.120331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.120672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.645 [2024-07-25 10:47:52.120699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.122956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.123016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.123075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.123138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.123543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.123601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.123654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.123706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.124085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.124118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.125959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.126018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.645 [2024-07-25 10:47:52.126072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.126150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.126415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.126477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.126541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.126594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.126857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.126883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.128954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.129013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.129068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.129144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.129536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.645 [2024-07-25 10:47:52.129595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.129648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.129702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.129965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.129992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.131821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.131881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.131935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.131994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.132300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.132351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.132416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.132469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.645 [2024-07-25 10:47:52.132760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.132787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.135083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.135167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.135213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.135257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.135585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.135644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.135697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.135749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.136013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.136039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.137785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.645 [2024-07-25 10:47:52.137845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.137898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.137952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.138355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.138422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.138476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.138528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.138913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.138940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.140961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.141020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.141078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.645 [2024-07-25 10:47:52.141154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.645 [2024-07-25 10:47:52.141446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.910 [2024-07-25 10:47:52.428326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.910 [2024-07-25 10:47:52.428685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.910 [2024-07-25 10:47:52.429070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.910 [2024-07-25 10:47:52.429113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.910 [2024-07-25 10:47:52.431305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.910 [2024-07-25 10:47:52.431653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.910 [2024-07-25 10:47:52.432007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.910 [2024-07-25 10:47:52.432344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.910 [2024-07-25 10:47:52.433026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.910 [2024-07-25 10:47:52.433359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.433726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.434080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.434411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.434440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.911 [2024-07-25 10:47:52.436637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.436996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.437339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.437704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.438424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.438780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.439144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.439494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.439956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.439983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.442270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.442625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.442989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.911 [2024-07-25 10:47:52.443326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.444083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.444408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.444780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.445159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.445534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.445568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.447729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.448084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.448426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.448781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.449463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.449824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.911 [2024-07-25 10:47:52.450190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.450524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.450942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.450968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.453255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.453589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.453947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.454002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.454710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.455067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.455406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.455761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.456167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.911 [2024-07-25 10:47:52.456189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.458335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.458692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.459051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.459400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.459461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.459863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.460232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.460572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.460928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.461286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.461728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.461755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.911 [2024-07-25 10:47:52.463857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.463919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.463980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.464040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.464374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.464475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.464533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.464588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.464643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.465033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.465059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.467010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.467080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.911 [2024-07-25 10:47:52.467156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.467214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.467578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.467651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.467707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.467761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.467815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.468216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.468242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.470121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.470189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.470239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.470285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.911 [2024-07-25 10:47:52.470689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.470759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.470822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.470877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.470931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.471289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.471313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.473222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.911 [2024-07-25 10:47:52.473272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.473317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.473363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.473766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.473844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.912 [2024-07-25 10:47:52.473900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.473954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.474007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.474365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.474389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.476325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.476375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.476434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.476481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.476894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.476969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.477026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.477082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.912 [2024-07-25 10:47:52.477158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.477475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.477502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.479372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.479438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.479494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.479549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.479943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.480013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.480077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.480154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.480219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.480648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.912 [2024-07-25 10:47:52.480674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.482677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.482738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.482793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.482847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.483183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.483245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.483293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.483345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.483404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.483836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.483863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.485854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.912 [2024-07-25 10:47:52.485920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.485986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.486041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.486372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.486462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.486519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.486573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.486626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.487022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.487049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.488980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.489050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.489125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.912 [2024-07-25 10:47:52.489204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.489517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.489595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.489651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.489706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.489759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.490169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.490193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.492080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.492159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.492206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.492251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.912 [2024-07-25 10:47:52.492631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.912 [2024-07-25 10:47:52.492702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:48.916 [2024-07-25 10:47:52.583595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.583934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.584287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.585133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.586443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.588128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.588369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.588404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.591642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.591998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.592360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.592725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.593122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.916 [2024-07-25 10:47:52.594075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.595375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.597100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.598792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.599232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.599259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.600974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.601313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.601672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.602027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.602318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.603671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.605337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:48.916 [2024-07-25 10:47:52.607050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.607604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.607904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.607930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.609769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.610132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.610470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.612058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.612370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:48.916 [2024-07-25 10:47:52.614119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.615875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.616919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.618223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.177 [2024-07-25 10:47:52.618481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.618509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.620679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.621035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.622757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.624497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.624773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.626419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.628044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.629610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.631469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.631750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.631776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.177 [2024-07-25 10:47:52.634556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.635867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.637536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.639249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.639643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.641373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.643206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.644909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.646076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.646438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.646465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.177 [2024-07-25 10:47:52.649587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.651304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.178 [2024-07-25 10:47:52.653182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.654177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.654466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.656192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.657956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.658681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.659036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.659449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.659476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.663059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.664883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.666125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.667450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.178 [2024-07-25 10:47:52.667740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.669508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.669864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.670229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.670551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.670958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.670984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.673946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.675478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.676957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.678551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.678843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.679219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.178 [2024-07-25 10:47:52.679574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.679927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.680295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.680652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.680678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.683377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.684743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.686478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.688338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.688613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.688932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.689255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.689614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.178 [2024-07-25 10:47:52.690611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.690938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.690964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.693832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.695563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.697419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.697750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.698163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.698485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.698869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.699960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.701297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.701541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.178 [2024-07-25 10:47:52.701562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.704722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.706564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.706881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.707195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.707497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.707833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.708727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.710025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.711736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.711989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.712009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.715340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.178 [2024-07-25 10:47:52.715647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.715936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.716231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.716541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.717450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.718762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.720190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.721876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.722210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.722233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.723930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.724262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.724552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.178 [2024-07-25 10:47:52.724840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.725149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.726472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.728150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.729890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.730492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.730769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.730790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.732564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.732884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.733179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.178 [2024-07-25 10:47:52.734723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.179 [2024-07-25 10:47:52.735022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.179 [2024-07-25 10:47:52.736641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.179 [2024-07-25 10:47:52.738294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.179 [2024-07-25 10:47:52.738758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.179 [2024-07-25 10:47:52.740055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.179 [2024-07-25 10:47:52.740323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.179 [2024-07-25 10:47:52.740345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.179 [2024-07-25 10:47:52.742296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.179 [2024-07-25 10:47:52.742606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.179 [2024-07-25 10:47:52.744037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.179 [2024-07-25 10:47:52.745338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.179 [2024-07-25 10:47:52.745564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.179 [2024-07-25 10:47:52.747275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.179 [2024-07-25 10:47:52.747652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.179 [2024-07-25 10:47:52.749059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.182 [... identical "Failed to get src_mbufs!" errors repeated continuously from 10:47:52.749059 through 10:47:52.876862 ...] 
00:32:49.182 [2024-07-25 10:47:52.876923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.876974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.877019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.877064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.877369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.877393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.878802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.878863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.878935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.878997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.879445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.879535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.879602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.182 [2024-07-25 10:47:52.879651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.879698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.879998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.880042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.881875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.881951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.882013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.882081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.882341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.882424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.882474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.882521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.882568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.182 [2024-07-25 10:47:52.882835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.182 [2024-07-25 10:47:52.882870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.884616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.884678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.884743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.884790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.885149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.885219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.885267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.885319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.885364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.885682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.885720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.443 [2024-07-25 10:47:52.887009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.887058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.887128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.887181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.887485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.887567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.887630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.887674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.887721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.887961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.887981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.889662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.889721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.443 [2024-07-25 10:47:52.889789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.889835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.890151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.890211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.890258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.890304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.890350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.890662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.890699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.892131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.892200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.892247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.892297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.443 [2024-07-25 10:47:52.892567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.892685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.892739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.892788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.892836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.893166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.893192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.894950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.895010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.895074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.895141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.895501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.443 [2024-07-25 10:47:52.895571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.444 [2024-07-25 10:47:52.895634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.895681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.895726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.895977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.895998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.897477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.897535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.897601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.897647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.897878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.897938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.897986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.898031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.444 [2024-07-25 10:47:52.898077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.898335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.898358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.900281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.900332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.900388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.900436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.900704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.900765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.900823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.900900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.900964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.901225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.444 [2024-07-25 10:47:52.901248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.902700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.902757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.902822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.902867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.903100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.903171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.903222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.903267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.903313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.903572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.903593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.905542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.444 [2024-07-25 10:47:52.905600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.905665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.905710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.905977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.906039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.906085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.906138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.906184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.906417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.906442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.907908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.907966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.908029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.444 [2024-07-25 10:47:52.908074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.908312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.908373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.908420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.908468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.908515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.908810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.908831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.910724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.910782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.910852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.910900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.911144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.444 [2024-07-25 10:47:52.911204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.444 [2024-07-25 10:47:52.911255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.911300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.911346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.911576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.911596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.913046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.913133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.913181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.913226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.913493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.913557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.913604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.445 [2024-07-25 10:47:52.913657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.913708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.914068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.914093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.915908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.915968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.916031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.916076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.916330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.916394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.916458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.916503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.445 [2024-07-25 10:47:52.916552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.445 [2024-07-25 10:47:52.916784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [last message repeated through 2024-07-25 10:47:53.135286]
00:32:49.449 [2024-07-25 10:47:53.135584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.135880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.136247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.136286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.138425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.138760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.139122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.139452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.139795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.140113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.140410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.140707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.141037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.449 [2024-07-25 10:47:53.141352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.141375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.143599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.143927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.144237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.144535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.144868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.145417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.145795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.146150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.146497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.146833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.449 [2024-07-25 10:47:53.146854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.714 [2024-07-25 10:47:53.149244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.149641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.150006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.150350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.150688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.151036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.151345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.151648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.151983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.152306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.152347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.154630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.154956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.714 [2024-07-25 10:47:53.155261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.155559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.155885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.714 [2024-07-25 10:47:53.156200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.156500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.156796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.157175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.157509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.157543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.159683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.160047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.160400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.160786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.715 [2024-07-25 10:47:53.161184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.161566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.161901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.162208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.162510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.162904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.162940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.165286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.165590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.165892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.166235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.166559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.166909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.715 [2024-07-25 10:47:53.167216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.167518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.167818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.168231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.168272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.170483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.170815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.171121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.171428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.171813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.172204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.172504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.172803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.715 [2024-07-25 10:47:53.173099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.173425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.173463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.175655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.176030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.176377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.176717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.177120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.177461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.177759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.178119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.178436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.178796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.715 [2024-07-25 10:47:53.178817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.181060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.181421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.181722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.182021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.182348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.182656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.182991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.183296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.183598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.183945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.183966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.186207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.715 [2024-07-25 10:47:53.186513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.186815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.187158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.187474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.187820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.188129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.188443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.188787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.189147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.189169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.191343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.191652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.191950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.715 [2024-07-25 10:47:53.192293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.192647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.192983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.193293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.193595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.193935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.194238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.194260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.197386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.197729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.197780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.198079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.198411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.715 [2024-07-25 10:47:53.198737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.199075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.199439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.199780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.715 [2024-07-25 10:47:53.200141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.200164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.203573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.205266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.206076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.206146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.206404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.208278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.209961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.716 [2024-07-25 10:47:53.211319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.211624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.211968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.211993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.213795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.213854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.213921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.213967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.214213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.214273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.214321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.214366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.214412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.716 [2024-07-25 10:47:53.214707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.214728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.216141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.216209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.216255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.216305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.216682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.216754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.216816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.216861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.216907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.217261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.716 [2024-07-25 10:47:53.217283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.716 [2024-07-25 10:47:53.218940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:49.716 [... identical "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 repeated for each subsequent allocation attempt, timestamps 10:47:53.218999 through 10:47:53.283446 ...]
00:32:49.719 [2024-07-25 10:47:53.283680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.283700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.285196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.285246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.285295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.285341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.285575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.285640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.285689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.285737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.285783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.286092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.286118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.719 [2024-07-25 10:47:53.287873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.287931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.287997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.288055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.288329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.288408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.288482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.288528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.288574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.288810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.288830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.290341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.290390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.719 [2024-07-25 10:47:53.290435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.290479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.290830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.290932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.290995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.719 [2024-07-25 10:47:53.291044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.291090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.291469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.291510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.293265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.293315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.295185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.295240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.720 [2024-07-25 10:47:53.295498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.295559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.295637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.295702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.295749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.296043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.296068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.297768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.297827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.297891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.298223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.298636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.298708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.720 [2024-07-25 10:47:53.298770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.298815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.298860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.299173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.299195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.302608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.304461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.306143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.307385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.307712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.308023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.308375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.308692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.720 [2024-07-25 10:47:53.309981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.310307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.310329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.313737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.315385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.316690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.317020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.317356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.317702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.318001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.319129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.320452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.320714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.720 [2024-07-25 10:47:53.320735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.323952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.325330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.325635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.325955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.326305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.326615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.327787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.329065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.330784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.331048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.331069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.334081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.720 [2024-07-25 10:47:53.334417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.334717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.335055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.335434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.336556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.337869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.339522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.341232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.341527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.341548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.343340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.343641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.343941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.720 [2024-07-25 10:47:53.344286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.344545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.345831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.347486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.349183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.349808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.350117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.350139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.352054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.352480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.352822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.354643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.354908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.720 [2024-07-25 10:47:53.356731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.358462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.359640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.360956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.361227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.720 [2024-07-25 10:47:53.361248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.363383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.363724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.365265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.366941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.367211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.368780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.370219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.721 [2024-07-25 10:47:53.371508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.373195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.373429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.373449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.375728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.377169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.378858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.380523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.380811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.382444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.383963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.385768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.387482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.721 [2024-07-25 10:47:53.387785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.387806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.390942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.392593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.394284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.394649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.394889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.396787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.398448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.399785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.400119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.400470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.400496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.721 [2024-07-25 10:47:53.403927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.405611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.406176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.407502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.407767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.409452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.410548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.410877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.411182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.411487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.411507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.414809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.721 [2024-07-25 10:47:53.416422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.982 [2024-07-25 10:47:53.417777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [... same message repeated for each subsequent allocation attempt through 10:47:53.578447]
00:32:49.985 [2024-07-25 10:47:53.580672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.580730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.580792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.580836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.581079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.581141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.581186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.581230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.581273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.581499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.581519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.583075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.583140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.985 [2024-07-25 10:47:53.583206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.583252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.583499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.583561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.583607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.583652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.583698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.584057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.584083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.585958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.586024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.586086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.586152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.985 [2024-07-25 10:47:53.586381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.586454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.586500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.586544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.586591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.586814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.586835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.588357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.588405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.588451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.588494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.588770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.588829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.985 [2024-07-25 10:47:53.588874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.588919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.588962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.589290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.589311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.590959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.985 [2024-07-25 10:47:53.591018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.591083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.591150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.591382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.591454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.591500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.591545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.986 [2024-07-25 10:47:53.591592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.591882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.591926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.593510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.593568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.593630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.593673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.594001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.594073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.594143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.594187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.594232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.594571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.986 [2024-07-25 10:47:53.594598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.596195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.596243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.596287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.596329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.596596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.596655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.596701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.596745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.596788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.597034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.597054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.598618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.986 [2024-07-25 10:47:53.598676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.598740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.598783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.599114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.599194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.599242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.599293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.599338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.599699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.599724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.601148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.601214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.601262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.986 [2024-07-25 10:47:53.601306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.601539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.601597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.601649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.601693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.601736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.601959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.601979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.603864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.603927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.603988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.604031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.604339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.986 [2024-07-25 10:47:53.604436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.604483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.604526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.604570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.604799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.604819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.606335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.606382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.606430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.606477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.606704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.606776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.606822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.986 [2024-07-25 10:47:53.606867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.606910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.607140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.607161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.609134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.609200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.609244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.609287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.609545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.609601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.609647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.609691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.609737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.986 [2024-07-25 10:47:53.609966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.609986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.611464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.611522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.986 [2024-07-25 10:47:53.611587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.611635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.611859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.611918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.611963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.612007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.612050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.612361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.612400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.987 [2024-07-25 10:47:53.614790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.614851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.614919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.614963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.615196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.615256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.615302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.615347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.615393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.615669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.615689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.619497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.619566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.987 [2024-07-25 10:47:53.619631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.619674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.619989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.620070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.620138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.620199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.620245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.620490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.620510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.624308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.624357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.624400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:49.987 [2024-07-25 10:47:53.624443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:49.987 [2024-07-25 10:47:53.624666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.251 [2024-07-25 10:47:53.785994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.251 [2024-07-25 10:47:53.786290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.786584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.786858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.787225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.787514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.787803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.788097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.788433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.788454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.790774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.791097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.791395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.791684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.251 [2024-07-25 10:47:53.791983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.792346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.792655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.792950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.793269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.793669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.793695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.795772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.796111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.796401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.796691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.797007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.797389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.251 [2024-07-25 10:47:53.797695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.798027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.798377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.798703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.798725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.800750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.801123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.801466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.801819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.802191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.802541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.802895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.803251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.251 [2024-07-25 10:47:53.803598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.803971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.803996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.806219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.806573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.806930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.807283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.807685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.251 [2024-07-25 10:47:53.808047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.808422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.808787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.809171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.809490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.252 [2024-07-25 10:47:53.809514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.812091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.812450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.812779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.813140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.813512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.813893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.814241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.814551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.814935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.815305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.815328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.817404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.252 [2024-07-25 10:47:53.817734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.818091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.818451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.818822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.819210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.819585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.819936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.820300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.820728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.820754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.823044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.823425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.823779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.252 [2024-07-25 10:47:53.824142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.824494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.824875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.825255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.825611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.825961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.826317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.826340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.828597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.828956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.829317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.829687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.830075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.252 [2024-07-25 10:47:53.830460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.830815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.831193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.831556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.831900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.831926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.834235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.834615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.834967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.835330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.835776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.836160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.836506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.252 [2024-07-25 10:47:53.836861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.837226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.837628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.837654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.841042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.841418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.841794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.842166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.842597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.842962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.843313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.843675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.844027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.252 [2024-07-25 10:47:53.844419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.844441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.846814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.847184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.847545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.252 [2024-07-25 10:47:53.847922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.848213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.850087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.851770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.853175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.854767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.855085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.855116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.253 [2024-07-25 10:47:53.857110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.857482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.858157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.859491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.859776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.861412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.862536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.864344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.866192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.866482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.866506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.868619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.869578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.253 [2024-07-25 10:47:53.870924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.872654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.872938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.873772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.875509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.877298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.879036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.879346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.879369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.882913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.884446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.886293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.888046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.253 [2024-07-25 10:47:53.888367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.889689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.891309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.892987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.893694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.894126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.894168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.897896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.899625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.900860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.902665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.902949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.253 [2024-07-25 10:47:53.904642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.253 [2024-07-25 10:47:53.906460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:50.519 [2024-07-25 10:47:54.016632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.016701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.016756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.016809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.016863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.017195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.017232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.018774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.018832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.018885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.018937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.019249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.019328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.519 [2024-07-25 10:47:54.019390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.019457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.019509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.019784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.019809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.021616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.021673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.021726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.021778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.022176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.022249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.022298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.022348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.519 [2024-07-25 10:47:54.022394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.022706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.022731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.024235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.024303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.024350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.024411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.024687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.024757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.024812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.024866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.024920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.025209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.519 [2024-07-25 10:47:54.025231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.027213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.027279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.027344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.027389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.027678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.027748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.519 [2024-07-25 10:47:54.027807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.027861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.027917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.028208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.028229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.029692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.520 [2024-07-25 10:47:54.029749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.029805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.029860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.030141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.030228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.030276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.030322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.030368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.030734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.030760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.032785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.032842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.032895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.520 [2024-07-25 10:47:54.032947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.033250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.033324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.033385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.033452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.033510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.033784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.033809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.035343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.035408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.035473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.035532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.035801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.520 [2024-07-25 10:47:54.035871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.035927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.035981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.036035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.036420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.036457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.038275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.038343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.038412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.038486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.038762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.038833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.038887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.520 [2024-07-25 10:47:54.038941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.038999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.039279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.039300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.040803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.040861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.040914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.040969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.041311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.041389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.041459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.041511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.041564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.520 [2024-07-25 10:47:54.041956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.041981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.043711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.043769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.043829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.043882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.044171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.044232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.044279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.044324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.044369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.044720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.044745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.520 [2024-07-25 10:47:54.046217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.046282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.046635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.046693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.047080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.047168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.047223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.047269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.047314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.047714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.047739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.049240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.049305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.520 [2024-07-25 10:47:54.049349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.049807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.050081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.050168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.050218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.520 [2024-07-25 10:47:54.050263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.050309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.050582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.050607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.052719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.053073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.054873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.056719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.521 [2024-07-25 10:47:54.056990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.058791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.060011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.061323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.063036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.063310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.063332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.065584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.067446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.069175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.070614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.070944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.072666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.521 [2024-07-25 10:47:54.074323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.074994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.075337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.075738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.075764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.079342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.081196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.082324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.083666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.083942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.085640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.086356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.521 [2024-07-25 10:47:54.086727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.521 [2024-07-25 10:47:54.087080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
[previous message repeated ~270 times between 10:47:54.087080 and 10:47:54.347582; duplicates elided] 
00:32:50.785 [2024-07-25 10:47:54.347939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.785 [2024-07-25 10:47:54.348298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.348680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.349074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.349099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.349126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.349158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.352043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.352392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.352759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.353148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.353827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.354201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.354535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.785 [2024-07-25 10:47:54.354889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.355238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.355274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.355286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.355298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.357558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.357915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.358273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.358664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.359376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.359747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.360100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.360473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.785 [2024-07-25 10:47:54.360860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.360884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.360900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.360914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.363769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.364130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.364461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.364816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.365536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.365890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.366246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.366588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.366939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.785 [2024-07-25 10:47:54.366964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.366980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.366995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.369184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.369532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.369894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.370253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.370978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.371322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.371688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.372056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.372440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.372461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.785 [2024-07-25 10:47:54.372490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.372506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.375356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.375722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.376076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.376423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.377157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.377501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.377854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.378229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.378636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.378662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.378678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.785 [2024-07-25 10:47:54.378694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.380841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.381210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.381583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.381939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.382664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.383018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.383355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.785 [2024-07-25 10:47:54.383732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.384095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.384142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.384156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.384180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.786 [2024-07-25 10:47:54.386993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.387335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.387698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.388049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.388708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.389066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.389410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.389764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.390185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.390208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.390222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.390235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.392359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.786 [2024-07-25 10:47:54.392730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.393090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.393451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.394166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.394511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.394869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.395230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.395672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.395697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.395713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.395728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.398669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.399032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.786 [2024-07-25 10:47:54.399371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.399741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.400485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.400552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.400910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.401264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.401646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.401671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.401688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.401704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.403797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.403864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.404234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.786 [2024-07-25 10:47:54.404576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.405304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.406896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.408357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.408965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.409273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.409295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.409309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.409321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.415598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.416832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.418377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.419928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.786 [2024-07-25 10:47:54.422034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.423462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.425034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.426577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.426934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.426960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.426977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.426992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.430173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.431745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.433305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.434790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.436128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.786 [2024-07-25 10:47:54.436473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.438053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.439595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.440620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.442251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.442318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.442729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.442756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.442772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.442789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.442804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.447040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.447110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.786 [2024-07-25 10:47:54.448497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.448556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.448833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.450108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.450177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.451704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.451763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.452044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.452074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.786 [2024-07-25 10:47:54.452090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.452113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.452129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.455089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:50.787 [2024-07-25 10:47:54.455169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.456365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.456413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.456711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.458304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.458370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.459056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.459122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.459442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.459467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.459482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.459497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:50.787 [2024-07-25 10:47:54.459512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.051 [2024-07-25 10:47:54.558228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.051 [2024-07-25 10:47:54.558274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.558603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.558627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.558643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.558658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.558672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.560322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.560387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.560451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.560520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.560797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.560867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.560924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.051 [2024-07-25 10:47:54.560978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.561031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.561355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.561392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.561407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.561422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.561436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.566180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.566244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.566290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.566335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.566732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.566802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.051 [2024-07-25 10:47:54.566856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.566909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.566969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.567296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.567333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.567346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.567357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.567369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.568921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.568980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.569038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.569090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.569372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.051 [2024-07-25 10:47:54.569451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.569507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.569565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.569618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.569894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.569918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.569933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.569949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.569964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.573450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.573510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.573563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.573617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.051 [2024-07-25 10:47:54.573998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.574068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.574130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.574191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.574238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.574508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.574548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.574566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.574581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.574595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.576057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.576125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.576190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.051 [2024-07-25 10:47:54.576236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.576504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.576575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.576631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.576689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.576748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.577023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.577049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.577065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.577079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.577094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.579524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.579582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.051 [2024-07-25 10:47:54.579639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.579693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.579969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.580039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.580094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.580169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.580216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.580577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.580602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.580617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.580631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.580655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.582118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.051 [2024-07-25 10:47:54.582177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.582251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.582298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.582576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.582647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.582703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.582761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.582817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.583237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.583259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.583288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.583302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.583316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.051 [2024-07-25 10:47:54.588607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.588667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.588725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.588778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.051 [2024-07-25 10:47:54.589075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.590303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.590368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.590433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.590490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.590770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.590794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.590810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.590825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.052 [2024-07-25 10:47:54.590839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.592821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.592898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.592960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.593013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.593406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.593492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.593547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.593599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.593653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.593963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.593987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.594003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.052 [2024-07-25 10:47:54.594018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.594032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.598814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.598878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.598931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.598984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.599318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.599408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.599474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.599528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.599582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.599875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.599900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.052 [2024-07-25 10:47:54.599915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.599930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.599945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.601720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.601778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.601831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.601884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.602191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.602253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.602307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.602354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.603927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.604216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.052 [2024-07-25 10:47:54.604238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.604251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.604263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.604275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.608268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.608638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.608696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.609046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.609448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.609518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.610542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.610601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.052 [2024-07-25 10:47:54.611786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.052 [2024-07-25 10:47:54.612068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... the same "Failed to get src_mbufs!" error repeated continuously from 10:47:54.612 through 10:47:54.783; duplicate log lines omitted ...]
00:32:51.315 [2024-07-25 10:47:54.783945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.784950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.785280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.785648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.786559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.787492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.787845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.788226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.788247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.788275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.788288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.788300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.791206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.315 [2024-07-25 10:47:54.791850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.793081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.793435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.793782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.795153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.795499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.795853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.796222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.796650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.796676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.796692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.796706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.796721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.315 [2024-07-25 10:47:54.800883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.801259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.801864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.803163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.803558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.803924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.804282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.804637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.804990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.805349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.805386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.805400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.805412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.315 [2024-07-25 10:47:54.805424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.808040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.808378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.810253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.810614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.811014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.811365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.811736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.813436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.813802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.814229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.814250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.814279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.315 [2024-07-25 10:47:54.814293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.814306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.821039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.821395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.823185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.824887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.825184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.825845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.827069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.827418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.828313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.828655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.828680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.315 [2024-07-25 10:47:54.828696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.828711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.828725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.833969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.835611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.837292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.839007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.839356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.841215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.315 [2024-07-25 10:47:54.841566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.841924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.843519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.843920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.316 [2024-07-25 10:47:54.843945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.843962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.843978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.843994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.849004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.850536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.852117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.853087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.853359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.853910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.854269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.855810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.856183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.316 [2024-07-25 10:47:54.856592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.856619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.856635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.856652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.856667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.863461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.865293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.867178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.867840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.868153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.868489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.869182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.870344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.316 [2024-07-25 10:47:54.870704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.871043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.871078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.871094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.871117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.871146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.875934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.877565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.877920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.879501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.879911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.879981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.880441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.316 [2024-07-25 10:47:54.881871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.882231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.882577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.882602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.882617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.882632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.882646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.886363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.887956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.889524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.889905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.890210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.890561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.316 [2024-07-25 10:47:54.890912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.892593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.892947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.893338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.893359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.893371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.893403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.893419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.898955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.900539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.901570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.903126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.903522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.316 [2024-07-25 10:47:54.903889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.905310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.905704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.906056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.906336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.906358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.906370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.906383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.906410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.912020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.913500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.914541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.915330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.316 [2024-07-25 10:47:54.915758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.916713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.917630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.917983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.918041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.918323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.918344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.918358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.918370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.316 [2024-07-25 10:47:54.918382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.317 [2024-07-25 10:47:54.923747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.317 [2024-07-25 10:47:54.923816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.317 [2024-07-25 10:47:54.925215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.317 [2024-07-25 10:47:54.925280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.581 [... same *ERROR*: Failed to get src_mbufs! message repeated continuously through 2024-07-25 10:47:55.053351; duplicate lines omitted ...] 
00:32:51.581 [2024-07-25 10:47:55.053407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.053421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.053434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.053461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.058480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.058550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.058606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.058659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.058938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.059006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.059062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.059148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.059197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.581 [2024-07-25 10:47:55.059575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.059601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.059617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.059633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.059648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.064649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.064708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.064761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.064814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.065156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.065250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.065300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.065346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.581 [2024-07-25 10:47:55.065417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.065744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.065769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.065784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.065804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.065819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.070933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.070993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.071047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.071118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.071473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.071543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.581 [2024-07-25 10:47:55.071598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.582 [2024-07-25 10:47:55.071652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.071706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.072085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.072117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.072134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.072163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.072177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.075955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.076014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.076075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.076138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.076419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.076499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.582 [2024-07-25 10:47:55.076555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.076613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.076668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.077016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.077040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.077055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.077070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.077084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.081230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.081299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.081344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.081414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.081694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.582 [2024-07-25 10:47:55.081763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.081824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.081877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.081931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.082226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.082248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.082262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.082274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.082286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.085901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.085960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.086013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.086070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.582 [2024-07-25 10:47:55.086470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.086540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.086594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.086647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.086700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.087018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.087042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.087057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.087072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.087087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.090714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.090777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.090832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.582 [2024-07-25 10:47:55.090885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.091190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.091251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.091300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.091349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.091408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.091689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.091713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.091729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.091743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.091757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.095324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.095390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.582 [2024-07-25 10:47:55.095462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.095515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.095898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.095965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.096019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.096077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.096144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.096421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.096446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.096462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.096477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.096492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.101032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.582 [2024-07-25 10:47:55.101091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.101156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.101219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.101482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.102385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.102467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.102520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.102573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.102970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.102995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.103012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.103027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.582 [2024-07-25 10:47:55.103043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.582 [2024-07-25 10:47:55.107400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.107462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.107515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.107568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.107951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.108019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.108076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.108137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.108204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.108492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.108531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.108547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.108562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.583 [2024-07-25 10:47:55.108576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.113826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.113906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.113959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.114011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.114337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.114413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.114484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.114537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.114590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.114992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.115017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.115033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.583 [2024-07-25 10:47:55.115047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.115062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.118987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.119046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.119112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.119179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.119417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.119499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.119555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.119609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.120306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.120596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.583 [2024-07-25 10:47:55.120621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.583 [2024-07-25 10:47:55.120637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... identical *ERROR* line repeated ~270 times between 10:47:55.120637 and 10:47:55.283215 ...]
00:32:51.586 [2024-07-25 10:47:55.283215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:51.586 [2024-07-25 10:47:55.283518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.586 [2024-07-25 10:47:55.283886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.586 [2024-07-25 10:47:55.284262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.586 [2024-07-25 10:47:55.285644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.286293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.286795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.286835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.286852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.286868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.286884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.290143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.291225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.292094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.849 [2024-07-25 10:47:55.293125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.293412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.294024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.294406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.295941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.296325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.296722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.296748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.296765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.296782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.296797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.298698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.300274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.849 [2024-07-25 10:47:55.302177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.303999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.304287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.304832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.306217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.306572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.307265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.307565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.307590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.307606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.307621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.307636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.310800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.849 [2024-07-25 10:47:55.311596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.313433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.315020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.315297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.316876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.317310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.319120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.319470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.319860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.319885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.319900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.319915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.319929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.849 [2024-07-25 10:47:55.323224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.324836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.326183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.327631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.327942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.329527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.331096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.331995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.333652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.334069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.334095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.334120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.334152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.849 [2024-07-25 10:47:55.334166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.337700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.339446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.341210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.342978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.343324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.344920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.346741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.348616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.350470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.350805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.350829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.350851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.849 [2024-07-25 10:47:55.350867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.350881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.352928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.354147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.355347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.356943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.357237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.358281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.360005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.361338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.362942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.363239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.849 [2024-07-25 10:47:55.363261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.849 [2024-07-25 10:47:55.363274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.363287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.363299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.366254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.366619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.367476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.368627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.368905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.370509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.372070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.373269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.374512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.374795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.850 [2024-07-25 10:47:55.374820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.374836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.374850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.374865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.376826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.378622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.378975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.379424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.379707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.381524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.383367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.385258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.386047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.850 [2024-07-25 10:47:55.386357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.386379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.386410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.386423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.386434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.388254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.388620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.390418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.390777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.391187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.391264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.392948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.394832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.850 [2024-07-25 10:47:55.396688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.396971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.396996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.397011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.397026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.397042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.398542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.398940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.400629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.400989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.401377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.403238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.403597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.850 [2024-07-25 10:47:55.403974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.405460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.405764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.405789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.405805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.405819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.405834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.408892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.410433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.410921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.412668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.413109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.413461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.850 [2024-07-25 10:47:55.415264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.415633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.415986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.416271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.416294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.416308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.416321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.416333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.419039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.420602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.422179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.422994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.850 [2024-07-25 10:47:55.423276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.850 [2024-07-25 10:47:55.423636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.852 [... previous message repeated for each subsequent allocation attempt through 2024-07-25 10:47:55.514076 ...] 
00:32:51.852 [2024-07-25 10:47:55.514091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.514113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.515650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.515708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.515765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.515818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.516091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.516178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.516228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.516278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.516325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.516715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.516741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.852 [2024-07-25 10:47:55.516756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.516771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.516786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.518657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.518718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.518774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.518826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.519100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.519186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.519234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.519280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.519325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.519594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.852 [2024-07-25 10:47:55.519619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.519635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.519649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.519664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.521196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.521260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.521305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.521359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.521713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.521782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.521836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.521887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.521941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.852 [2024-07-25 10:47:55.522335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.522356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.522386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.522399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.522413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.524276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.524344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.524389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.524452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.524728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.524798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.524853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.524906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.852 [2024-07-25 10:47:55.524964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.525299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.525334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.525347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.525359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.525385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.526944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.527003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.527061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.527121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.527486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.527558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.527613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.852 [2024-07-25 10:47:55.527670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.527722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.528154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.528193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.528224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.528237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.528251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.529815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.529873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.529926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.529978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.530330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.530405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.852 [2024-07-25 10:47:55.530466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.530524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.530578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.530883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.530907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.530922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.530937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.530951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.532705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.532764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.532818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.532871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.533278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.852 [2024-07-25 10:47:55.533352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.533414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.533480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.533539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.533922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.533946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.533961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.533976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.533990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.535496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.535554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.535610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.535662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.852 [2024-07-25 10:47:55.535976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.536046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.536119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.536183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.536232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.536509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.536534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.536550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.536564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.536579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.538368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.538440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.538497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.852 [2024-07-25 10:47:55.538551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.538937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.539003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.539058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.539149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.539198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.539489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.539514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.539535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.539550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.539565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.541091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.541181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.852 [2024-07-25 10:47:55.541230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.541276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.541571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.543162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.543228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.543275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.543320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.543692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.543717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.543733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.543748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.543762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.547032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.852 [2024-07-25 10:47:55.547109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.547176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.547221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.547512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.547583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.547650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.852 [2024-07-25 10:47:55.547716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.547769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.548043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.548067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.548083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.548099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.548148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.853 [2024-07-25 10:47:55.549591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.549653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.549707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.549759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.550116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.550195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.550244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.550292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.550355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.550819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.550849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.550866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:51.853 [2024-07-25 10:47:55.550881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:51.853 [2024-07-25 10:47:55.550896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:52.117 [2024-07-25 10:47:55.664612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.664964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.665306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.665745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.665773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.665789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.665806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.665822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.668964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.670151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.671395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.672957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.673241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.117 [2024-07-25 10:47:55.674351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.674730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.675082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.675440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.675823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.675849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.675871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.675888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.675902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.678317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.679597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.681175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.682748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.117 [2024-07-25 10:47:55.683052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.683406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.683759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.684118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.684475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.684754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.684777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.684792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.684807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.684822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.687504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.689078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.690629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.117 [2024-07-25 10:47:55.691799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.692184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.692564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.692915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.693266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.694874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.695191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.695212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.695225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.695237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.695254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.698316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.699917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.117 [2024-07-25 10:47:55.701419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.701778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.702185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.702546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.702899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.704204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.705438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.705723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.705747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.705763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.705777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.705792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.709111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.117 [2024-07-25 10:47:55.710875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.711236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.711575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.711938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.712312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.713334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.714593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.716166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.716431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.716471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.716486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.716501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.716515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.117 [2024-07-25 10:47:55.719748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.720114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.720454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.720806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.721221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.117 [2024-07-25 10:47:55.721983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.723218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.724788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.726331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.726642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.726667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.726683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.726698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.118 [2024-07-25 10:47:55.726712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.728523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.728878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.729251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.729611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.729923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.731190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.732765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.734323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.735328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.735627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.735652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.735668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.118 [2024-07-25 10:47:55.735682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.735697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.737537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.737904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.738257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.739452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.739764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.741321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.742920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.744065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.745675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.745981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.746005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.118 [2024-07-25 10:47:55.746020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.746035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.746049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.748041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.748383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.749191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.750420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.750701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.752266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.753752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.755051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.756280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.756574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.118 [2024-07-25 10:47:55.756598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.756614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.756628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.756642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.758730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.759131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.760605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.762342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.762638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.762712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.764260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.765513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.766742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.118 [2024-07-25 10:47:55.767021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.767046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.767061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.767076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.767090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.768925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.118 [2024-07-25 10:47:55.769278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.769847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.771198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.771475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.773299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.775113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.776076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.119 [2024-07-25 10:47:55.777299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.777600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.777625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.777640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.777655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.777669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.779900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.780256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.781910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.783778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.784069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.785841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.119 [2024-07-25 10:47:55.786564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.119 [2024-07-25 10:47:55.787809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.384 [... previous message repeated ~270 times through 2024-07-25 10:47:55.878053 ...] 
00:32:52.384 [2024-07-25 10:47:55.879514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.879571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.879629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.879682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.880041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.880119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.880183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.880243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.880289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.880674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.880699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.880716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.880731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.384 [2024-07-25 10:47:55.880745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.882463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.882522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.882579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.882631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.882908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.882976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.883031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.883093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.883156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.883453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.883478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.883494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.384 [2024-07-25 10:47:55.883509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.883523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.885056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.885121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.885185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.885230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.885605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.885673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.885728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.885787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.885844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.886274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.384 [2024-07-25 10:47:55.886296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.384 [2024-07-25 10:47:55.886325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.886338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.886351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.887894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.887951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.888003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.888055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.888379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.888457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.888513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.888566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.888622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.888930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.385 [2024-07-25 10:47:55.888954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.888970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.888984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.888999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.890673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.890731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.890784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.890837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.891230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.891303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.891349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.891411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.891464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.385 [2024-07-25 10:47:55.891836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.891866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.891882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.891897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.891911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.893420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.893482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.893558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.893614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.893887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.893956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.894011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.894069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.385 [2024-07-25 10:47:55.894130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.894398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.894423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.894439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.894453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.894467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.896429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.896487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.896540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.896593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.896870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.896939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.896998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.385 [2024-07-25 10:47:55.897052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.897116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.897370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.897391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.897404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.897436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.897449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.898989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.899047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.899123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.899185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.899443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.899513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.385 [2024-07-25 10:47:55.899569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.899623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.899677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.900116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.900156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.900171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.900185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.900211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.902156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.902226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.902272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.902316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.902703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.385 [2024-07-25 10:47:55.902772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.902828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.902880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.902934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.903322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.903344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.903373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.903385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.903398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.905315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.905383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.905451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.385 [2024-07-25 10:47:55.905505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.386 [2024-07-25 10:47:55.905896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.905962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.906016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.906075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.906151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.906513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.906539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.906555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.906571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.906586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.908478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.908537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.908590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.386 [2024-07-25 10:47:55.908642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.909055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.909412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.909473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.909533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.909588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.909934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.909960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.909976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.909991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.910005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.912314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.912386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.386 [2024-07-25 10:47:55.912460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.912515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.912866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.912937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.912992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.913045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.913117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.913543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.913569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.913586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.913603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.913619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.386 [2024-07-25 10:47:55.915512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.386 [2024-07-25 10:47:55.915580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:52.386-00:32:52.389 [... same *ERROR* line repeated for timestamps 2024-07-25 10:47:55.915652 through 10:47:56.037917 ...]
00:32:52.389 [2024-07-25 10:47:56.039034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.039331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.039353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.039366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.039395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.039410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.041456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.041812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.042508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.043753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.044035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.045653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.047185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.389 [2024-07-25 10:47:56.048416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.049664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.049946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.049970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.049987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.050001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.050016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.052097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.052483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.053979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.055752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.056034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.057494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.389 [2024-07-25 10:47:56.057984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.059373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.389 [2024-07-25 10:47:56.061050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.061326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.061349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.061362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.061375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.061405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.063711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.065584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.067234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.068941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.069244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.390 [2024-07-25 10:47:56.069787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.071177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.072823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.074366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.074667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.074693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.074709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.074723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.074739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.077716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.078960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.080539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.082117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.390 [2024-07-25 10:47:56.082419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.083846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.085069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.390 [2024-07-25 10:47:56.086677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.088269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.088660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.088686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.088702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.088717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.088732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.092005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.093524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.095126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.653 [2024-07-25 10:47:56.096553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.096906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.098199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.099794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.101357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.102304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.102694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.102719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.102734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.102749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.102765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.105812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.107355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.653 [2024-07-25 10:47:56.108949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.109380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.109668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.111388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.113236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.115117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.115465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.115871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.115897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.115913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.115928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.115944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.119199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.653 [2024-07-25 10:47:56.120776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.121401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.123166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.123422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.125014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.126650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.127007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.127346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.127746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.127771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.127787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.127802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.127818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.653 [2024-07-25 10:47:56.131243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.132914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.134009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.135259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.653 [2024-07-25 10:47:56.135555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.137154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.138336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.138706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.139057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.139483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.139510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.139527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.139543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.654 [2024-07-25 10:47:56.139560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.142824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.143898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.145164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.146750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.147034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.148245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.148600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.148953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.149324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.149740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.149766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.149783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.654 [2024-07-25 10:47:56.149799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.149814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.152936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.153298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.153648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.154000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.154424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.154799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.155186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.155517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.155874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.156292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.156314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.654 [2024-07-25 10:47:56.156347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.156361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.156373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.159312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.160048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.160385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.160751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.161173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.161247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.161609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.163268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.164961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.165252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.654 [2024-07-25 10:47:56.165274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.165288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.165300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.165313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.166812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.168667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.169023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.169362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.169744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.170117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.170553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.171975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.654 [2024-07-25 10:47:56.173705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.654 [2024-07-25 10:47:56.173989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:52.658 [... previous *ERROR* message repeated for timestamps 2024-07-25 10:47:56.174014 through 10:47:56.274193 ...]
00:32:52.658 [2024-07-25 10:47:56.274255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.274300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.274697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.274766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.274820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.274873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.274928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.275270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.275296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.275312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.275327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.275341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.277304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.658 [2024-07-25 10:47:56.277368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.277432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.277487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.277853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.277934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.277996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.278051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.278112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.278443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.278468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.278484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.278500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.278515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.658 [2024-07-25 10:47:56.280475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.280533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.280595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.280662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.281002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.281073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.281152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.281225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.281271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.281651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.281676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.281692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.281711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.658 [2024-07-25 10:47:56.281727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.283713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.283772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.283831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.283884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.284286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.284361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.284408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.284453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.284515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.284921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.284947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.284964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.658 [2024-07-25 10:47:56.284980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.284995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.286900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.286959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.287024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.287086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.287505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.287576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.287630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.287684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.287739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.288094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.658 [2024-07-25 10:47:56.288140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.658 [2024-07-25 10:47:56.288153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.288166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.288177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.290150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.290214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.290261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.290306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.290673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.290742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.290796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.290849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.290903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.291307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.659 [2024-07-25 10:47:56.291330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.291359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.291372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.291385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.293253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.293319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.293392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.293454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.293866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.293934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.293988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.294045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.294099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.659 [2024-07-25 10:47:56.294411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.294433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.294448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.294477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.294492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.296480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.296539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.296592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.296651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.297015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.297094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.297187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.297235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.659 [2024-07-25 10:47:56.297281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.297594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.297619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.297635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.297651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.297666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.299558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.299617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.299673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.299743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.300110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.300207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.300256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.659 [2024-07-25 10:47:56.300318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.300372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.300794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.300818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.300834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.300849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.300863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.302916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.302976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.303034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.303098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.303449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.303531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.659 [2024-07-25 10:47:56.303587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.303639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.303691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.304077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.304111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.304130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.304160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.304173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.305977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.306036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.306093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.306179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.306564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.659 [2024-07-25 10:47:56.306632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.306687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.306745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.306798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.307171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.307193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.307219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.307232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.307245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.309178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.309242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.309288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.659 [2024-07-25 10:47:56.309333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.660 [2024-07-25 10:47:56.309725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.660 [2024-07-25 10:47:56.310095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.660 [2024-07-25 10:47:56.310174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.660 [2024-07-25 10:47:56.310220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.660 [2024-07-25 10:47:56.310269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.660 [2024-07-25 10:47:56.310657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.660 [2024-07-25 10:47:56.310682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.660 [2024-07-25 10:47:56.310698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.660 [2024-07-25 10:47:56.310712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.660 [2024-07-25 10:47:56.310726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.660 [2024-07-25 10:47:56.313760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.660 [2024-07-25 10:47:56.313821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.660 [2024-07-25 10:47:56.313875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.660 [2024-07-25 10:47:56.313931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:52.923 [2024-07-25 10:47:56.453181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.453194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.453208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.453220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.457074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.458463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.460018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.461550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.461996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.463859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.465478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.467206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.468955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.923 [2024-07-25 10:47:56.469342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.469380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.469393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.469406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.469418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.472402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.474252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.476098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.477931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.478266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.479501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.481075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.482637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.923 [2024-07-25 10:47:56.483945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.484308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.484330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.484343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.484356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.484368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.487691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.489271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.490939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.491469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.491760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.493605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.495459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.923 [2024-07-25 10:47:56.497320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.497692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.498089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.498122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.498154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.498170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.498184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.501420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.502974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.503975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.505731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.506052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.507619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.923 [2024-07-25 10:47:56.509209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.509824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.510204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.510614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.510643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.510660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.510675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.510689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.514181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.515844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.516965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.518215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.518496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.923 [2024-07-25 10:47:56.520089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.521255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.521624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.521977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.522337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.522358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.522371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.522383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.522411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.525679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.526301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.527541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.529126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.923 [2024-07-25 10:47:56.529389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.531211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.531567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.531920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.532274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.532666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.532692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.532714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.532731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.923 [2024-07-25 10:47:56.532746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.534973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.535325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.535679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.924 [2024-07-25 10:47:56.536039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.536371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.536755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.537120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.537465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.537819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.538117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.538156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.538169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.538182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.538194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.540358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.540726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.924 [2024-07-25 10:47:56.541078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.542774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.543055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.544620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.546220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.546692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.548192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.548491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.548517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.548532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.548547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.548567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.550899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.924 [2024-07-25 10:47:56.551258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.553133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.554652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.554934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.556489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.557037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.558782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.560560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.560840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.560865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.560881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.560896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.560910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.924 [2024-07-25 10:47:56.563109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.564246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.565475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.567040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.567313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.568480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.570130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.571468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.573039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.573319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.573340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.573354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.573367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.924 [2024-07-25 10:47:56.573379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.575960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.577226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.578793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.580348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.580640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.581785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.583036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.584573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.586145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.586460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.586485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.586502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.924 [2024-07-25 10:47:56.586517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.586532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.590574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.592194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.593832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.595513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.595859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.595927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.597225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.598804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.600343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.600643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:52.924 [2024-07-25 10:47:56.600669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:52.924 [2024-07-25 10:47:56.600684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:53.790
00:32:53.790 Latency(us)
00:32:53.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:53.790 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:53.790 Verification LBA range: start 0x0 length 0x100
00:32:53.790 crypto_ram : 5.86 43.65 2.73 0.00 0.00 2845994.86 281173.71 2162396.73
00:32:53.790 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:53.790 Verification LBA range: start 0x100 length 0x100
00:32:53.790 crypto_ram : 5.89 43.48 2.72 0.00 0.00 2855136.52 278066.82 2336382.67
00:32:53.790 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:53.790 Verification LBA range: start 0x0 length 0x100
00:32:53.790 crypto_ram1 : 5.87 43.64 2.73 0.00 0.00 2749930.00 279620.27 1975983.22
00:32:53.790 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:53.790 Verification LBA range: start 0x100 length 0x100
00:32:53.790 crypto_ram1 : 5.89 43.47 2.72 0.00 0.00 2753974.42 276513.37 2137541.59
00:32:53.790 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:53.790 Verification LBA range: start 0x0 length 0x100
00:32:53.790 crypto_ram2 : 5.61 295.06 18.44 0.00 0.00 390606.05 75342.13 587202.56
00:32:53.790 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:53.790 Verification LBA range: start 0x100 length 0x100
00:32:53.790 crypto_ram2 : 5.64 292.94 18.31 0.00 0.00 391382.04 33981.63 590309.45
00:32:53.790 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:53.790 Verification LBA range: start 0x0 length 0x100
00:32:53.790 crypto_ram3 : 5.68 303.76 18.99 0.00 0.00 367123.68 26214.40 400789.05
00:32:53.790 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:53.790 Verification LBA range: start 0x100 length 0x100
00:32:53.790 crypto_ram3 : 5.72 303.69 18.98 0.00 0.00 365290.65 5388.52 478461.35
00:32:53.790 ===================================================================================================================
00:32:53.790 Total : 1369.69 85.61 0.00 0.00 696732.22 5388.52 2336382.67
00:32:54.048
00:32:54.048 real 0m9.218s
00:32:54.048 user 0m17.385s
00:32:54.048 sys 0m0.554s
00:32:54.048 10:47:57 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:54.048 10:47:57 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:32:54.048 ************************************
00:32:54.048 END TEST bdev_verify_big_io
00:32:54.048 ************************************
00:32:54.048 10:47:57 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:54.048 10:47:57 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:32:54.048 10:47:57 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:54.048 10:47:57 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:32:54.048 ************************************
00:32:54.048 START TEST bdev_write_zeroes
00:32:54.048 ************************************
00:32:54.048 10:47:57 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:54.307 [2024-07-25 10:47:57.778629] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization...
00:32:54.307 [2024-07-25 10:47:57.778693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522758 ]
00:32:54.307 [2024-07-25 10:47:57.861143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:54.307 [2024-07-25 10:47:57.985252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:54.307 [2024-07-25 10:47:58.006581] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:32:54.307 [2024-07-25 10:47:58.014617] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:32:54.565 [2024-07-25 10:47:58.022628] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:32:54.565 [2024-07-25 10:47:58.142233] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:32:57.094 [2024-07-25 10:48:00.407504] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:32:57.094 [2024-07-25 10:48:00.407583] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:32:57.094 [2024-07-25 10:48:00.407602] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:57.094 [2024-07-25 10:48:00.415523] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:32:57.094 [2024-07-25 10:48:00.415552] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:32:57.094 [2024-07-25 10:48:00.415567] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:57.094 [2024-07-25 10:48:00.423544] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:32:57.094 [2024-07-25 10:48:00.423571] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:32:57.094 [2024-07-25 10:48:00.423586] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:57.094 [2024-07-25 10:48:00.431566] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:32:57.094 [2024-07-25 10:48:00.431592] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:32:57.094 [2024-07-25 10:48:00.431607] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:57.094 Running I/O for 1 seconds...
00:32:58.028
00:32:58.028 Latency(us)
00:32:58.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:58.028 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:32:58.028 crypto_ram : 1.03 1877.90 7.34 0.00 0.00 67630.29 6189.51 82721.00
00:32:58.028 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:32:58.028 crypto_ram1 : 1.03 1883.70 7.36 0.00 0.00 67011.60 6068.15 76507.21
00:32:58.028 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:32:58.028 crypto_ram2 : 1.02 14409.98 56.29 0.00 0.00 8738.05 2852.03 11747.93
00:32:58.028 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:32:58.028 crypto_ram3 : 1.02 14445.43 56.43 0.00 0.00 8683.86 2536.49 9272.13
00:32:58.028 ===================================================================================================================
00:32:58.028 Total : 32617.01 127.41 0.00 0.00 15497.84 2536.49 82721.00
00:32:58.593
00:32:58.593 real 0m4.336s
00:32:58.593 user 0m3.874s
00:32:58.593 sys 0m0.410s
00:32:58.593 10:48:02 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:58.593 10:48:02
blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:32:58.593 ************************************
00:32:58.593 END TEST bdev_write_zeroes
00:32:58.593 ************************************
00:32:58.593 10:48:02 blockdev_crypto_qat -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:58.593 10:48:02 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:32:58.593 10:48:02 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:58.593 10:48:02 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:32:58.593 ************************************
00:32:58.593 START TEST bdev_json_nonenclosed
00:32:58.593 ************************************
00:32:58.593 10:48:02 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:58.593 [2024-07-25 10:48:02.164491] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization...
00:32:58.594 [2024-07-25 10:48:02.164554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523406 ]
00:32:58.594 [2024-07-25 10:48:02.246781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:58.852 [2024-07-25 10:48:02.371705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:58.852 [2024-07-25 10:48:02.371817] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:32:58.852 [2024-07-25 10:48:02.371840] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:32:58.852 [2024-07-25 10:48:02.371858] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:58.852
00:32:58.852 real 0m0.399s
00:32:58.852 user 0m0.283s
00:32:58.852 sys 0m0.112s
10:48:02 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable
10:48:02 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST bdev_json_nonenclosed
************************************
10:48:02 blockdev_crypto_qat -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
10:48:02 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
10:48:02 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
10:48:02 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:32:59.110 ************************************
00:32:59.110 START TEST
bdev_json_nonarray 00:32:59.110 ************************************ 00:32:59.110 10:48:02 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:59.110 [2024-07-25 10:48:02.614921] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:32:59.110 [2024-07-25 10:48:02.614980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523438 ] 00:32:59.110 [2024-07-25 10:48:02.697865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.368 [2024-07-25 10:48:02.823581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.368 [2024-07-25 10:48:02.823688] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:32:59.368 [2024-07-25 10:48:02.823713] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:32:59.368 [2024-07-25 10:48:02.823727] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:59.368 00:32:59.368 real 0m0.397s 00:32:59.368 user 0m0.272s 00:32:59.368 sys 0m0.122s 00:32:59.368 10:48:02 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:59.368 10:48:02 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:32:59.369 ************************************ 00:32:59.369 END TEST bdev_json_nonarray 00:32:59.369 ************************************ 00:32:59.369 10:48:02 blockdev_crypto_qat -- bdev/blockdev.sh@786 -- # [[ crypto_qat == bdev ]] 00:32:59.369 10:48:02 blockdev_crypto_qat -- bdev/blockdev.sh@793 -- # [[ crypto_qat == gpt ]] 00:32:59.369 10:48:02 blockdev_crypto_qat -- bdev/blockdev.sh@797 -- # [[ crypto_qat == crypto_sw ]] 00:32:59.369 10:48:02 blockdev_crypto_qat -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:32:59.369 10:48:02 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # cleanup 00:32:59.369 10:48:02 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:32:59.369 10:48:02 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:32:59.369 10:48:02 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:32:59.369 10:48:02 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:32:59.369 10:48:02 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t ]] 00:32:59.369 10:48:02 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:32:59.369 00:32:59.369 real 1m12.204s 00:32:59.369 user 2m35.572s 00:32:59.369 sys 0m8.326s 00:32:59.369 10:48:02 blockdev_crypto_qat -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:32:59.369 10:48:02 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:59.369 ************************************ 00:32:59.369 END TEST blockdev_crypto_qat 00:32:59.369 ************************************ 00:32:59.369 10:48:03 -- spdk/autotest.sh@364 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:32:59.369 10:48:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:59.369 10:48:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:59.369 10:48:03 -- common/autotest_common.sh@10 -- # set +x 00:32:59.369 ************************************ 00:32:59.369 START TEST chaining 00:32:59.369 ************************************ 00:32:59.369 10:48:03 chaining -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:32:59.369 * Looking for test storage... 00:32:59.369 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:32:59.627 10:48:03 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@7 -- # uname -s 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:32:59.627 10:48:03 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.627 10:48:03 chaining -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.627 10:48:03 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.627 10:48:03 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.627 10:48:03 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.627 10:48:03 chaining -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.627 10:48:03 chaining -- paths/export.sh@5 -- # export PATH 00:32:59.627 10:48:03 chaining -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@47 -- # : 0 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:59.627 10:48:03 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:32:59.627 10:48:03 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:32:59.627 10:48:03 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 
33445566778899001122334455001122) 00:32:59.627 10:48:03 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:32:59.627 10:48:03 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:32:59.627 10:48:03 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.627 10:48:03 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:59.627 10:48:03 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:59.627 10:48:03 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:32:59.627 10:48:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:02.154 10:48:05 
chaining -- nvmf/common.sh@296 -- # e810=() 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@297 -- # x722=() 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@298 -- # mlx=() 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:02.154 10:48:05 chaining -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:02.154 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:02.154 10:48:05 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:02.155 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:02.155 Found net devices under 0000:09:00.0: cvl_0_0 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:02.155 Found net devices under 0000:09:00.1: cvl_0_1 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:02.155 10:48:05 chaining -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:02.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:02.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:33:02.155 00:33:02.155 --- 10.0.0.2 ping statistics --- 00:33:02.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.155 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:02.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:02.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:33:02.155 00:33:02.155 --- 10.0.0.1 ping statistics --- 00:33:02.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.155 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@422 -- # return 0 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:02.155 10:48:05 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:02.155 10:48:05 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:02.155 10:48:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@481 -- # nvmfpid=2525591 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:02.155 10:48:05 chaining -- nvmf/common.sh@482 -- # waitforlisten 2525591 00:33:02.155 10:48:05 chaining -- common/autotest_common.sh@831 -- # '[' -z 2525591 ']' 00:33:02.155 10:48:05 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:02.155 10:48:05 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:02.155 10:48:05 chaining -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:02.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:02.155 10:48:05 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:02.155 10:48:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:02.155 [2024-07-25 10:48:05.761463] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:33:02.155 [2024-07-25 10:48:05.761552] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:02.155 [2024-07-25 10:48:05.850280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.413 [2024-07-25 10:48:05.965349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:02.413 [2024-07-25 10:48:05.965423] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:02.413 [2024-07-25 10:48:05.965440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:02.413 [2024-07-25 10:48:05.965454] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:02.413 [2024-07-25 10:48:05.965470] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:02.413 [2024-07-25 10:48:05.965500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@864 -- # return 0 00:33:03.348 10:48:06 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:03.348 10:48:06 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@69 -- # mktemp 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.RYB2rdSUxl 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@69 -- # mktemp 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.R08l8gygqP 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:03.348 malloc0 00:33:03.348 true 00:33:03.348 true 00:33:03.348 [2024-07-25 10:48:06.763325] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:33:03.348 crypto0 00:33:03.348 [2024-07-25 10:48:06.771332] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:33:03.348 crypto1 00:33:03.348 [2024-07-25 10:48:06.779459] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:03.348 [2024-07-25 10:48:06.795662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@85 -- # 
update_stats 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:33:03.348 10:48:06 chaining -- 
bdev/chaining.sh@53 -- # get_stat executed decrypt 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:03.348 10:48:06 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@88 -- # 
dd if=/dev/urandom of=/tmp/tmp.RYB2rdSUxl bs=1K count=64 00:33:03.348 64+0 records in 00:33:03.348 64+0 records out 00:33:03.348 65536 bytes (66 kB, 64 KiB) copied, 0.00039185 s, 167 MB/s 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.RYB2rdSUxl --ob Nvme0n1 --bs 65536 --count 1 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@25 -- # local config 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:33:03.348 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@31 -- # config='{ 00:33:03.348 "subsystems": [ 00:33:03.348 { 00:33:03.348 "subsystem": "bdev", 00:33:03.348 "config": [ 00:33:03.348 { 00:33:03.348 "method": "bdev_nvme_attach_controller", 00:33:03.348 "params": { 00:33:03.348 "trtype": "tcp", 00:33:03.348 "adrfam": "IPv4", 00:33:03.348 "name": "Nvme0", 00:33:03.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:03.348 "traddr": "10.0.0.2", 00:33:03.348 "trsvcid": "4420" 00:33:03.348 } 00:33:03.348 }, 00:33:03.348 { 00:33:03.348 "method": "bdev_set_options", 00:33:03.348 "params": { 00:33:03.348 "bdev_auto_examine": false 00:33:03.348 } 00:33:03.348 } 00:33:03.348 ] 00:33:03.348 } 00:33:03.348 ] 00:33:03.348 }' 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.RYB2rdSUxl --ob Nvme0n1 --bs 65536 --count 1 00:33:03.348 10:48:06 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:33:03.348 "subsystems": [ 00:33:03.348 { 00:33:03.348 "subsystem": "bdev", 00:33:03.348 "config": [ 00:33:03.348 { 00:33:03.348 "method": "bdev_nvme_attach_controller", 00:33:03.348 "params": { 
00:33:03.348 "trtype": "tcp", 00:33:03.348 "adrfam": "IPv4", 00:33:03.348 "name": "Nvme0", 00:33:03.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:03.348 "traddr": "10.0.0.2", 00:33:03.348 "trsvcid": "4420" 00:33:03.348 } 00:33:03.348 }, 00:33:03.348 { 00:33:03.348 "method": "bdev_set_options", 00:33:03.348 "params": { 00:33:03.348 "bdev_auto_examine": false 00:33:03.348 } 00:33:03.348 } 00:33:03.348 ] 00:33:03.348 } 00:33:03.348 ] 00:33:03.348 }' 00:33:03.348 [2024-07-25 10:48:07.040220] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:33:03.348 [2024-07-25 10:48:07.040302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525774 ] 00:33:03.607 [2024-07-25 10:48:07.122286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.607 [2024-07-25 10:48:07.248751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.430  Copying: 64/64 [kB] (average 12 MBps) 00:33:04.430 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:04.430 10:48:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:04.430 10:48:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:04.430 10:48:07 chaining -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:04.430 10:48:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.430 10:48:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:04.430 10:48:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:04.430 10:48:07 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:04.431 10:48:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:04.431 10:48:07 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:04.431 10:48:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:04.431 10:48:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.431 10:48:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:04.431 10:48:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:04.431 10:48:07 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.431 10:48:07 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:33:04.431 10:48:07 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:33:04.431 10:48:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:04.431 10:48:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:04.431 10:48:07 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:04.431 10:48:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:04.431 10:48:07 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:04.431 10:48:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:04.431 10:48:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:04.431 10:48:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.431 10:48:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:04.431 10:48:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@96 -- # update_stats 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:04.431 10:48:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.431 10:48:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:04.431 
10:48:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:04.431 10:48:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.431 10:48:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:04.431 10:48:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:04.431 10:48:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.431 10:48:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:04.431 10:48:08 
chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.431 10:48:08 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:33:04.688 10:48:08 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:33:04.688 10:48:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:04.688 10:48:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:04.688 10:48:08 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:04.688 10:48:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:04.688 10:48:08 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:04.688 10:48:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:04.688 10:48:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.688 10:48:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:04.688 10:48:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:04.688 10:48:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.688 10:48:08 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:33:04.688 10:48:08 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.R08l8gygqP --ib Nvme0n1 --bs 65536 --count 1 00:33:04.688 10:48:08 chaining -- bdev/chaining.sh@25 -- # local config 00:33:04.688 10:48:08 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:33:04.689 10:48:08 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:33:04.689 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:33:04.689 10:48:08 chaining -- bdev/chaining.sh@31 -- # config='{ 00:33:04.689 "subsystems": [ 00:33:04.689 { 00:33:04.689 "subsystem": "bdev", 00:33:04.689 "config": [ 00:33:04.689 { 00:33:04.689 "method": "bdev_nvme_attach_controller", 00:33:04.689 
"params": { 00:33:04.689 "trtype": "tcp", 00:33:04.689 "adrfam": "IPv4", 00:33:04.689 "name": "Nvme0", 00:33:04.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:04.689 "traddr": "10.0.0.2", 00:33:04.689 "trsvcid": "4420" 00:33:04.689 } 00:33:04.689 }, 00:33:04.689 { 00:33:04.689 "method": "bdev_set_options", 00:33:04.689 "params": { 00:33:04.689 "bdev_auto_examine": false 00:33:04.689 } 00:33:04.689 } 00:33:04.689 ] 00:33:04.689 } 00:33:04.689 ] 00:33:04.689 }' 00:33:04.689 10:48:08 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.R08l8gygqP --ib Nvme0n1 --bs 65536 --count 1 00:33:04.689 10:48:08 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:33:04.689 "subsystems": [ 00:33:04.689 { 00:33:04.689 "subsystem": "bdev", 00:33:04.689 "config": [ 00:33:04.689 { 00:33:04.689 "method": "bdev_nvme_attach_controller", 00:33:04.689 "params": { 00:33:04.689 "trtype": "tcp", 00:33:04.689 "adrfam": "IPv4", 00:33:04.689 "name": "Nvme0", 00:33:04.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:04.689 "traddr": "10.0.0.2", 00:33:04.689 "trsvcid": "4420" 00:33:04.689 } 00:33:04.689 }, 00:33:04.689 { 00:33:04.689 "method": "bdev_set_options", 00:33:04.689 "params": { 00:33:04.689 "bdev_auto_examine": false 00:33:04.689 } 00:33:04.689 } 00:33:04.689 ] 00:33:04.689 } 00:33:04.689 ] 00:33:04.689 }' 00:33:04.689 [2024-07-25 10:48:08.270431] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:33:04.689 [2024-07-25 10:48:08.270525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526301 ] 00:33:04.689 [2024-07-25 10:48:08.359971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.946 [2024-07-25 10:48:08.483622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.462  Copying: 64/64 [kB] (average 15 MBps) 00:33:05.462 00:33:05.462 10:48:09 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:33:05.462 10:48:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:05.462 10:48:09 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:05.462 10:48:09 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:05.462 10:48:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:05.462 10:48:09 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:05.462 10:48:09 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:05.462 10:48:09 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.462 10:48:09 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:05.462 10:48:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:05.462 10:48:09 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.462 10:48:09 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:33:05.462 10:48:09 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:33:05.462 10:48:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:05.462 10:48:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:05.463 10:48:09 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:05.463 10:48:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:05.463 10:48:09 chaining -- bdev/chaining.sh@40 -- # [[ -z 
encrypt ]] 00:33:05.463 10:48:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:05.463 10:48:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:05.463 10:48:09 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.463 10:48:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:05.463 10:48:09 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.721 10:48:09 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:33:05.721 10:48:09 chaining -- bdev/chaining.sh@102 -- # get_stat executed decrypt 00:33:05.721 10:48:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:05.721 10:48:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:05.721 10:48:09 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:05.721 10:48:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:05.721 10:48:09 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:05.721 10:48:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:05.721 10:48:09 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.721 10:48:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:05.721 10:48:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:05.721 10:48:09 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.721 10:48:09 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@40 -- # 
[[ -z copy ]] 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:05.722 10:48:09 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.722 10:48:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:05.722 10:48:09 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.RYB2rdSUxl /tmp/tmp.R08l8gygqP 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@25 -- # local config 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:33:05.722 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@31 -- # config='{ 00:33:05.722 "subsystems": [ 00:33:05.722 { 00:33:05.722 "subsystem": "bdev", 00:33:05.722 "config": [ 00:33:05.722 { 00:33:05.722 "method": "bdev_nvme_attach_controller", 00:33:05.722 "params": { 00:33:05.722 "trtype": "tcp", 00:33:05.722 "adrfam": "IPv4", 00:33:05.722 "name": "Nvme0", 00:33:05.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:05.722 "traddr": "10.0.0.2", 00:33:05.722 "trsvcid": "4420" 00:33:05.722 } 00:33:05.722 }, 00:33:05.722 { 00:33:05.722 "method": "bdev_set_options", 00:33:05.722 "params": { 00:33:05.722 "bdev_auto_examine": false 00:33:05.722 } 00:33:05.722 } 00:33:05.722 ] 00:33:05.722 } 00:33:05.722 ] 00:33:05.722 }' 00:33:05.722 
10:48:09 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:33:05.722 10:48:09 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:33:05.722 "subsystems": [ 00:33:05.722 { 00:33:05.722 "subsystem": "bdev", 00:33:05.722 "config": [ 00:33:05.722 { 00:33:05.722 "method": "bdev_nvme_attach_controller", 00:33:05.722 "params": { 00:33:05.722 "trtype": "tcp", 00:33:05.722 "adrfam": "IPv4", 00:33:05.722 "name": "Nvme0", 00:33:05.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:05.722 "traddr": "10.0.0.2", 00:33:05.722 "trsvcid": "4420" 00:33:05.722 } 00:33:05.722 }, 00:33:05.722 { 00:33:05.722 "method": "bdev_set_options", 00:33:05.722 "params": { 00:33:05.722 "bdev_auto_examine": false 00:33:05.722 } 00:33:05.722 } 00:33:05.722 ] 00:33:05.722 } 00:33:05.722 ] 00:33:05.722 }' 00:33:05.722 [2024-07-25 10:48:09.345453] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:33:05.722 [2024-07-25 10:48:09.345527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526603 ] 00:33:05.722 [2024-07-25 10:48:09.427558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.980 [2024-07-25 10:48:09.550051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.495  Copying: 64/64 [kB] (average 12 MBps) 00:33:06.495 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@106 -- # update_stats 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:06.495 10:48:10 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.495 10:48:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:06.495 10:48:10 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 
00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:06.495 10:48:10 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.495 10:48:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:06.495 10:48:10 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:06.495 10:48:10 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:06.495 10:48:10 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.495 10:48:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:06.495 10:48:10 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:06.753 
10:48:10 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:06.753 10:48:10 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.753 10:48:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:06.753 10:48:10 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.RYB2rdSUxl --ob Nvme0n1 --bs 4096 --count 16 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@25 -- # local config 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:33:06.753 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@31 -- # config='{ 00:33:06.753 "subsystems": [ 00:33:06.753 { 00:33:06.753 "subsystem": "bdev", 00:33:06.753 "config": [ 00:33:06.753 { 00:33:06.753 "method": "bdev_nvme_attach_controller", 00:33:06.753 "params": { 00:33:06.753 "trtype": "tcp", 00:33:06.753 "adrfam": "IPv4", 00:33:06.753 "name": "Nvme0", 00:33:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:06.753 "traddr": "10.0.0.2", 00:33:06.753 "trsvcid": "4420" 00:33:06.753 } 00:33:06.753 }, 00:33:06.753 { 00:33:06.753 "method": "bdev_set_options", 00:33:06.753 "params": { 00:33:06.753 "bdev_auto_examine": false 00:33:06.753 } 00:33:06.753 } 00:33:06.753 ] 00:33:06.753 } 00:33:06.753 ] 00:33:06.753 }' 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@33 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.RYB2rdSUxl --ob Nvme0n1 --bs 4096 --count 16 00:33:06.753 10:48:10 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:33:06.753 "subsystems": [ 00:33:06.753 { 00:33:06.753 "subsystem": "bdev", 00:33:06.753 "config": [ 00:33:06.753 { 00:33:06.753 "method": "bdev_nvme_attach_controller", 00:33:06.753 "params": { 00:33:06.753 "trtype": "tcp", 00:33:06.753 "adrfam": "IPv4", 00:33:06.753 "name": "Nvme0", 00:33:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:06.753 "traddr": "10.0.0.2", 00:33:06.753 "trsvcid": "4420" 00:33:06.753 } 00:33:06.753 }, 00:33:06.753 { 00:33:06.753 "method": "bdev_set_options", 00:33:06.753 "params": { 00:33:06.753 "bdev_auto_examine": false 00:33:06.753 } 00:33:06.753 } 00:33:06.753 ] 00:33:06.753 } 00:33:06.753 ] 00:33:06.753 }' 00:33:06.753 [2024-07-25 10:48:10.340042] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:33:06.753 [2024-07-25 10:48:10.340134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526756 ] 00:33:06.753 [2024-07-25 10:48:10.421465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.011 [2024-07-25 10:48:10.547126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.528  Copying: 64/64 [kB] (average 10 MBps) 00:33:07.528 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@40 -- # 
[[ -z '' ]] 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:07.528 10:48:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.528 10:48:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:07.528 10:48:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:07.528 10:48:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.528 10:48:11 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:07.528 10:48:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:07.528 10:48:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 
00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@114 -- # update_stats 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 
00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:07.786 10:48:11 chaining -- 
bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:07.786 10:48:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@117 -- # : 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.R08l8gygqP --ib Nvme0n1 --bs 4096 --count 16 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@25 -- # local config 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems 
--trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:33:07.786 10:48:11 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:33:07.786 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:33:08.044 10:48:11 chaining -- bdev/chaining.sh@31 -- # config='{ 00:33:08.044 "subsystems": [ 00:33:08.044 { 00:33:08.044 "subsystem": "bdev", 00:33:08.044 "config": [ 00:33:08.044 { 00:33:08.044 "method": "bdev_nvme_attach_controller", 00:33:08.044 "params": { 00:33:08.044 "trtype": "tcp", 00:33:08.044 "adrfam": "IPv4", 00:33:08.044 "name": "Nvme0", 00:33:08.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:08.044 "traddr": "10.0.0.2", 00:33:08.044 "trsvcid": "4420" 00:33:08.044 } 00:33:08.044 }, 00:33:08.044 { 00:33:08.044 "method": "bdev_set_options", 00:33:08.044 "params": { 00:33:08.044 "bdev_auto_examine": false 00:33:08.044 } 00:33:08.044 } 00:33:08.044 ] 00:33:08.044 } 00:33:08.044 ] 00:33:08.044 }' 00:33:08.044 10:48:11 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.R08l8gygqP --ib Nvme0n1 --bs 4096 --count 16 00:33:08.044 10:48:11 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:33:08.044 "subsystems": [ 00:33:08.044 { 00:33:08.044 "subsystem": "bdev", 00:33:08.044 "config": [ 00:33:08.044 { 00:33:08.044 "method": "bdev_nvme_attach_controller", 00:33:08.044 "params": { 00:33:08.044 "trtype": "tcp", 00:33:08.044 "adrfam": "IPv4", 00:33:08.044 "name": "Nvme0", 00:33:08.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:08.044 "traddr": "10.0.0.2", 00:33:08.044 "trsvcid": "4420" 00:33:08.044 } 00:33:08.044 }, 00:33:08.044 { 00:33:08.044 "method": "bdev_set_options", 00:33:08.044 "params": { 00:33:08.044 "bdev_auto_examine": false 00:33:08.044 } 00:33:08.044 } 00:33:08.044 ] 00:33:08.044 } 00:33:08.044 ] 00:33:08.044 }' 00:33:08.044 [2024-07-25 10:48:11.563792] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 
initialization... 00:33:08.044 [2024-07-25 10:48:11.563862] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526932 ] 00:33:08.044 [2024-07-25 10:48:11.644966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.302 [2024-07-25 10:48:11.768583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.817  Copying: 64/64 [kB] (average 744 kBps) 00:33:08.817 00:33:08.817 10:48:12 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:33:08.817 10:48:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:08.817 10:48:12 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:08.817 10:48:12 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:08.817 10:48:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:08.817 10:48:12 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:08.817 10:48:12 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:33:08.817 10:48:12 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:08.817 10:48:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.817 10:48:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:09.076 10:48:12 chaining -- 
bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@121 -- # get_stat executed decrypt 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:33:09.076 10:48:12 
chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.RYB2rdSUxl /tmp/tmp.R08l8gygqP 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.RYB2rdSUxl /tmp/tmp.R08l8gygqP 00:33:09.076 10:48:12 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:33:09.076 10:48:12 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:09.076 10:48:12 chaining -- nvmf/common.sh@117 -- # sync 00:33:09.076 10:48:12 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:09.076 10:48:12 chaining -- nvmf/common.sh@120 -- # set +e 00:33:09.076 10:48:12 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:09.076 10:48:12 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:09.076 rmmod nvme_tcp 00:33:09.076 rmmod nvme_fabrics 00:33:09.076 rmmod nvme_keyring 00:33:09.076 10:48:12 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:09.076 10:48:12 chaining -- nvmf/common.sh@124 -- # set -e 00:33:09.076 10:48:12 chaining -- nvmf/common.sh@125 -- # return 0 00:33:09.076 10:48:12 chaining -- nvmf/common.sh@489 -- # '[' -n 2525591 ']' 00:33:09.076 10:48:12 chaining -- nvmf/common.sh@490 -- # killprocess 2525591 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@950 -- # '[' -z 
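The get_stat helper seen above (bdev/chaining.sh@44) filters `accel_get_stats` output with `jq` to pull a single opcode's counter. A minimal reproduction against canned JSON; the numbers are sample values standing in for a live RPC response:

```shell
# Select one operation's counter from accel_get_stats-shaped JSON.
# select() keeps only the array element whose .opcode matches, then
# .executed projects the counter. Sample stats, not live output.
stats='{"sequence_executed":47,"operations":[{"opcode":"encrypt","executed":36},{"opcode":"decrypt","executed":46}]}'
echo "$stats" | jq -r '.operations[] | select(.opcode == "encrypt").executed'
```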
2525591 ']' 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@954 -- # kill -0 2525591 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@955 -- # uname 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:09.076 10:48:12 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2525591 00:33:09.077 10:48:12 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:09.077 10:48:12 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:09.077 10:48:12 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2525591' 00:33:09.077 killing process with pid 2525591 00:33:09.077 10:48:12 chaining -- common/autotest_common.sh@969 -- # kill 2525591 00:33:09.077 10:48:12 chaining -- common/autotest_common.sh@974 -- # wait 2525591 00:33:09.335 10:48:13 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:09.335 10:48:13 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:09.335 10:48:13 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:09.335 10:48:13 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:09.335 10:48:13 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:09.335 10:48:13 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.335 10:48:13 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:09.335 10:48:13 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.866 10:48:15 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:11.866 10:48:15 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:33:11.866 10:48:15 chaining -- bdev/chaining.sh@132 -- # bperfpid=2527380 00:33:11.866 10:48:15 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 
--wait-for-rpc -z 00:33:11.866 10:48:15 chaining -- bdev/chaining.sh@134 -- # waitforlisten 2527380 00:33:11.866 10:48:15 chaining -- common/autotest_common.sh@831 -- # '[' -z 2527380 ']' 00:33:11.866 10:48:15 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.866 10:48:15 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:11.866 10:48:15 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.866 10:48:15 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:11.866 10:48:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:11.866 [2024-07-25 10:48:15.125203] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:33:11.866 [2024-07-25 10:48:15.125298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2527380 ] 00:33:11.866 [2024-07-25 10:48:15.208495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.866 [2024-07-25 10:48:15.329767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.432 10:48:16 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:12.432 10:48:16 chaining -- common/autotest_common.sh@864 -- # return 0 00:33:12.432 10:48:16 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:33:12.432 10:48:16 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.432 10:48:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:12.691 malloc0 00:33:12.691 true 00:33:12.691 true 00:33:12.691 [2024-07-25 10:48:16.227483] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 
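The waitforlisten step above blocks until the app's UNIX-domain RPC socket (`/var/tmp/spdk.sock`) comes up, retrying up to `max_retries` times. A rough standalone equivalent of that polling loop; the function name, socket path, and retry count here are illustrative, not SPDK's actual helper:

```shell
# Poll until a UNIX-domain socket path exists, with a bounded retry count.
# Returns success only if the socket appeared before retries ran out.
wait_for_socket() {
  local sock=$1 max_retries=${2:-100} i=0
  while [ ! -S "$sock" ] && [ "$i" -lt "$max_retries" ]; do
    sleep 0.1
    i=$((i + 1))
  done
  [ -S "$sock" ]
}
```

A caller would follow this with an actual RPC (as the log does with `rpc_cmd`), since the socket existing does not by itself prove the server is ready to answer.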
00:33:12.691 crypto0 00:33:12.691 [2024-07-25 10:48:16.235504] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:33:12.691 crypto1 00:33:12.691 10:48:16 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.691 10:48:16 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:12.691 Running I/O for 5 seconds... 00:33:17.977 00:33:17.977 Latency(us) 00:33:17.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:17.977 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:33:17.977 Verification LBA range: start 0x0 length 0x2000 00:33:17.977 crypto1 : 5.01 12285.40 47.99 0.00 0.00 20775.75 4441.88 14175.19 00:33:17.977 =================================================================================================================== 00:33:17.977 Total : 12285.40 47.99 0.00 0.00 20775.75 4441.88 14175.19 00:33:17.977 0 00:33:17.977 10:48:21 chaining -- bdev/chaining.sh@146 -- # killprocess 2527380 00:33:17.977 10:48:21 chaining -- common/autotest_common.sh@950 -- # '[' -z 2527380 ']' 00:33:17.977 10:48:21 chaining -- common/autotest_common.sh@954 -- # kill -0 2527380 00:33:17.977 10:48:21 chaining -- common/autotest_common.sh@955 -- # uname 00:33:17.977 10:48:21 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:17.977 10:48:21 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2527380 00:33:17.977 10:48:21 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:17.977 10:48:21 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:17.977 10:48:21 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2527380' 00:33:17.977 killing process with pid 2527380 00:33:17.977 10:48:21 chaining -- common/autotest_common.sh@969 -- # kill 2527380 00:33:17.977 Received shutdown signal, test time 
was about 5.000000 seconds 00:33:17.977 00:33:17.977 Latency(us) 00:33:17.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:17.977 =================================================================================================================== 00:33:17.977 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:17.977 10:48:21 chaining -- common/autotest_common.sh@974 -- # wait 2527380 00:33:18.235 10:48:21 chaining -- bdev/chaining.sh@152 -- # bperfpid=2528174 00:33:18.235 10:48:21 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:33:18.235 10:48:21 chaining -- bdev/chaining.sh@154 -- # waitforlisten 2528174 00:33:18.235 10:48:21 chaining -- common/autotest_common.sh@831 -- # '[' -z 2528174 ']' 00:33:18.235 10:48:21 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.235 10:48:21 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:18.236 10:48:21 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:18.236 10:48:21 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:18.236 10:48:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:18.236 [2024-07-25 10:48:21.748004] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:33:18.236 [2024-07-25 10:48:21.748088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528174 ] 00:33:18.236 [2024-07-25 10:48:21.832685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.493 [2024-07-25 10:48:21.953153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.493 10:48:21 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:18.493 10:48:21 chaining -- common/autotest_common.sh@864 -- # return 0 00:33:18.493 10:48:21 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:33:18.493 10:48:21 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.493 10:48:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:18.493 malloc0 00:33:18.493 true 00:33:18.493 true 00:33:18.493 [2024-07-25 10:48:22.134369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:33:18.493 [2024-07-25 10:48:22.134434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:18.493 [2024-07-25 10:48:22.134460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22de370 00:33:18.493 [2024-07-25 10:48:22.134475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:18.493 [2024-07-25 10:48:22.135564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:18.493 [2024-07-25 10:48:22.135591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:33:18.493 pt0 00:33:18.493 [2024-07-25 10:48:22.142397] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:33:18.493 crypto0 00:33:18.493 [2024-07-25 10:48:22.150417] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:33:18.493 crypto1 00:33:18.493 10:48:22 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.493 10:48:22 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:18.751 Running I/O for 5 seconds... 00:33:24.050 00:33:24.050 Latency(us) 00:33:24.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.050 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:33:24.050 Verification LBA range: start 0x0 length 0x2000 00:33:24.050 crypto1 : 5.01 9906.84 38.70 0.00 0.00 25775.76 5315.70 16796.63 00:33:24.050 =================================================================================================================== 00:33:24.050 Total : 9906.84 38.70 0.00 0.00 25775.76 5315.70 16796.63 00:33:24.050 0 00:33:24.050 10:48:27 chaining -- bdev/chaining.sh@167 -- # killprocess 2528174 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@950 -- # '[' -z 2528174 ']' 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@954 -- # kill -0 2528174 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@955 -- # uname 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2528174 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2528174' 00:33:24.050 killing process with pid 2528174 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@969 -- # kill 2528174 00:33:24.050 Received shutdown signal, test time was about 5.000000 seconds 00:33:24.050 00:33:24.050 Latency(us) 00:33:24.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.050 
=================================================================================================================== 00:33:24.050 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@974 -- # wait 2528174 00:33:24.050 10:48:27 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT 00:33:24.050 10:48:27 chaining -- bdev/chaining.sh@170 -- # killprocess 2528174 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@950 -- # '[' -z 2528174 ']' 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@954 -- # kill -0 2528174 00:33:24.050 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2528174) - No such process 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@977 -- # echo 'Process with pid 2528174 is not found' 00:33:24.050 Process with pid 2528174 is not found 00:33:24.050 10:48:27 chaining -- bdev/chaining.sh@171 -- # wait 2528174 00:33:24.050 10:48:27 chaining -- bdev/chaining.sh@175 -- # nvmftestinit 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:24.050 10:48:27 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:33:24.050 10:48:27 chaining 
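The killprocess/wait sequence above probes the target pid with `kill -0` before signalling, which is why a pid that already exited takes the "No such process" branch in the log. A small sketch of that check-then-kill pattern:

```shell
# check-then-kill: kill -0 tests whether a pid exists (and is signalable)
# without actually delivering a signal; only then do we send SIGTERM.
sleep 30 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then
  kill "$pid"
fi
# wait reaps the job; a killed child exits nonzero, so tolerate that here
wait "$pid" 2>/dev/null || true
```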
-- common/autotest_common.sh@10 -- # set +x 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@296 -- # e810=() 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@297 -- # x722=() 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@298 -- # mlx=() 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:24.050 10:48:27 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:24.051 10:48:27 
chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:24.051 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:24.051 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.051 10:48:27 chaining -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:24.051 Found net devices under 0000:09:00.0: cvl_0_0 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:24.051 Found net devices under 0000:09:00.1: cvl_0_1 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:33:24.051 10:48:27 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:24.051 10:48:27 
chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:24.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:24.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:33:24.051 00:33:24.051 --- 10.0.0.2 ping statistics --- 00:33:24.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.051 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:24.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:24.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:33:24.051 00:33:24.051 --- 10.0.0.1 ping statistics --- 00:33:24.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.051 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@422 -- # return 0 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:24.051 10:48:27 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:24.309 10:48:27 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:33:24.309 10:48:27 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:24.309 10:48:27 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:24.309 10:48:27 chaining -- common/autotest_common.sh@10 -- # set +x 
00:33:24.309 10:48:27 chaining -- nvmf/common.sh@481 -- # nvmfpid=2528865 00:33:24.309 10:48:27 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:24.309 10:48:27 chaining -- nvmf/common.sh@482 -- # waitforlisten 2528865 00:33:24.309 10:48:27 chaining -- common/autotest_common.sh@831 -- # '[' -z 2528865 ']' 00:33:24.309 10:48:27 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.309 10:48:27 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:24.309 10:48:27 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.309 10:48:27 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:24.309 10:48:27 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:24.309 [2024-07-25 10:48:27.822644] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:33:24.309 [2024-07-25 10:48:27.822721] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.309 [2024-07-25 10:48:27.907178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.567 [2024-07-25 10:48:28.022160] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:24.567 [2024-07-25 10:48:28.022225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:24.567 [2024-07-25 10:48:28.022262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:24.567 [2024-07-25 10:48:28.022274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:24.567 [2024-07-25 10:48:28.022284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:24.567 [2024-07-25 10:48:28.022312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.131 10:48:28 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:25.131 10:48:28 chaining -- common/autotest_common.sh@864 -- # return 0 00:33:25.131 10:48:28 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:25.131 10:48:28 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:25.132 10:48:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:25.132 10:48:28 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.132 10:48:28 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:33:25.132 10:48:28 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.132 10:48:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:25.132 malloc0 00:33:25.132 [2024-07-25 10:48:28.795061] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.132 [2024-07-25 10:48:28.811257] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.132 10:48:28 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.132 10:48:28 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:33:25.132 10:48:28 chaining -- bdev/chaining.sh@189 -- # bperfpid=2529020 00:33:25.132 10:48:28 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w 
verify -o 4096 -q 256 --wait-for-rpc -z 00:33:25.132 10:48:28 chaining -- bdev/chaining.sh@191 -- # waitforlisten 2529020 /var/tmp/bperf.sock 00:33:25.132 10:48:28 chaining -- common/autotest_common.sh@831 -- # '[' -z 2529020 ']' 00:33:25.132 10:48:28 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:25.132 10:48:28 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:25.132 10:48:28 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:25.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:25.132 10:48:28 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:25.132 10:48:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:25.389 [2024-07-25 10:48:28.873333] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:33:25.389 [2024-07-25 10:48:28.873412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2529020 ] 00:33:25.389 [2024-07-25 10:48:28.954794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.390 [2024-07-25 10:48:29.075306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.322 10:48:29 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:26.322 10:48:29 chaining -- common/autotest_common.sh@864 -- # return 0 00:33:26.322 10:48:29 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:33:26.322 10:48:29 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:33:26.580 [2024-07-25 10:48:30.235847] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:33:26.580 nvme0n1 
00:33:26.580 true 00:33:26.580 crypto0 00:33:26.580 10:48:30 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:26.837 Running I/O for 5 seconds... 00:33:32.095 00:33:32.095 Latency(us) 00:33:32.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.095 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:33:32.095 Verification LBA range: start 0x0 length 0x2000 00:33:32.095 crypto0 : 5.02 8581.06 33.52 0.00 0.00 29738.00 3835.07 24272.59 00:33:32.095 =================================================================================================================== 00:33:32.095 Total : 8581.06 33.52 0.00 0.00 29738.00 3835.07 24272.59 00:33:32.095 0 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@205 -- # sequence=86178 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:33:32.095 
10:48:35 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:32.095 10:48:35 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:32.353 10:48:35 chaining -- bdev/chaining.sh@206 -- # encrypt=43089 00:33:32.353 10:48:35 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:33:32.353 10:48:35 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:33:32.353 10:48:35 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:32.353 10:48:35 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:32.353 10:48:35 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:32.353 10:48:35 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:32.353 10:48:35 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:32.353 10:48:35 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:33:32.353 10:48:35 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:32.353 10:48:35 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:32.609 10:48:36 chaining -- bdev/chaining.sh@207 -- # decrypt=43089 00:33:32.609 10:48:36 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:33:32.609 10:48:36 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 
00:33:32.609 10:48:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:32.609 10:48:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:32.609 10:48:36 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:33:32.609 10:48:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:32.609 10:48:36 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:33:32.609 10:48:36 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:33:32.609 10:48:36 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:32.610 10:48:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:33:32.867 10:48:36 chaining -- bdev/chaining.sh@208 -- # crc32c=86178 00:33:32.867 10:48:36 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:33:32.867 10:48:36 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:33:32.867 10:48:36 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:33:32.867 10:48:36 chaining -- bdev/chaining.sh@214 -- # killprocess 2529020 00:33:32.867 10:48:36 chaining -- common/autotest_common.sh@950 -- # '[' -z 2529020 ']' 00:33:32.867 10:48:36 chaining -- common/autotest_common.sh@954 -- # kill -0 2529020 00:33:32.867 10:48:36 chaining -- common/autotest_common.sh@955 -- # uname 00:33:32.867 10:48:36 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:32.867 10:48:36 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2529020 00:33:32.867 10:48:36 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:32.867 10:48:36 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:32.867 10:48:36 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2529020' 00:33:32.867 killing process with pid 2529020 00:33:32.867 10:48:36 chaining -- 
common/autotest_common.sh@969 -- # kill 2529020 00:33:32.867 Received shutdown signal, test time was about 5.000000 seconds 00:33:32.867 00:33:32.867 Latency(us) 00:33:32.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.867 =================================================================================================================== 00:33:32.867 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:32.867 10:48:36 chaining -- common/autotest_common.sh@974 -- # wait 2529020 00:33:33.124 10:48:36 chaining -- bdev/chaining.sh@219 -- # bperfpid=2529949 00:33:33.124 10:48:36 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:33:33.124 10:48:36 chaining -- bdev/chaining.sh@221 -- # waitforlisten 2529949 /var/tmp/bperf.sock 00:33:33.124 10:48:36 chaining -- common/autotest_common.sh@831 -- # '[' -z 2529949 ']' 00:33:33.124 10:48:36 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:33.124 10:48:36 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:33.124 10:48:36 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:33.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:33.124 10:48:36 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:33.124 10:48:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:33.124 [2024-07-25 10:48:36.758701] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:33:33.124 [2024-07-25 10:48:36.758762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2529949 ] 00:33:33.382 [2024-07-25 10:48:36.835719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.382 [2024-07-25 10:48:36.946135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.382 10:48:36 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:33.382 10:48:36 chaining -- common/autotest_common.sh@864 -- # return 0 00:33:33.382 10:48:36 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:33:33.382 10:48:36 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:33:33.946 [2024-07-25 10:48:37.401090] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:33:33.946 nvme0n1 00:33:33.946 true 00:33:33.946 crypto0 00:33:33.946 10:48:37 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:33.946 Running I/O for 5 seconds... 
00:33:39.207 00:33:39.207 Latency(us) 00:33:39.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.207 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:33:39.207 Verification LBA range: start 0x0 length 0x200 00:33:39.207 crypto0 : 5.01 1693.03 105.81 0.00 0.00 18567.95 1201.49 21456.97 00:33:39.207 =================================================================================================================== 00:33:39.207 Total : 1693.03 105.81 0.00 0.00 18567.95 1201.49 21456.97 00:33:39.207 0 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@39 -- # opcode= 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@233 -- # sequence=16960 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:33:39.207 10:48:42 chaining -- 
bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:33:39.207 10:48:42 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:39.464 10:48:43 chaining -- bdev/chaining.sh@234 -- # encrypt=8480 00:33:39.464 10:48:43 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:33:39.464 10:48:43 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:33:39.464 10:48:43 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:39.464 10:48:43 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:39.464 10:48:43 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:33:39.464 10:48:43 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:39.464 10:48:43 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:33:39.464 10:48:43 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:33:39.464 10:48:43 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:33:39.464 10:48:43 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:39.722 10:48:43 chaining -- bdev/chaining.sh@235 -- # decrypt=8480 00:33:39.722 10:48:43 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:33:39.722 10:48:43 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:33:39.722 10:48:43 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:33:39.722 10:48:43 chaining -- bdev/chaining.sh@39 -- # event=executed 00:33:39.722 10:48:43 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:33:39.722 10:48:43 
chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:33:39.722 10:48:43 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:33:39.722 10:48:43 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:33:39.722 10:48:43 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:39.722 10:48:43 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:33:39.979 10:48:43 chaining -- bdev/chaining.sh@236 -- # crc32c=16960 00:33:39.979 10:48:43 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:33:39.979 10:48:43 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:33:39.979 10:48:43 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:33:39.979 10:48:43 chaining -- bdev/chaining.sh@242 -- # killprocess 2529949 00:33:39.979 10:48:43 chaining -- common/autotest_common.sh@950 -- # '[' -z 2529949 ']' 00:33:39.979 10:48:43 chaining -- common/autotest_common.sh@954 -- # kill -0 2529949 00:33:39.979 10:48:43 chaining -- common/autotest_common.sh@955 -- # uname 00:33:39.979 10:48:43 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:39.979 10:48:43 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2529949 00:33:39.979 10:48:43 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:39.979 10:48:43 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:39.979 10:48:43 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2529949' 00:33:39.979 killing process with pid 2529949 00:33:39.979 10:48:43 chaining -- common/autotest_common.sh@969 -- # kill 2529949 00:33:39.979 Received shutdown signal, test time was about 5.000000 seconds 00:33:39.979 00:33:39.979 Latency(us) 00:33:39.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.979 
=================================================================================================================== 00:33:39.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:39.979 10:48:43 chaining -- common/autotest_common.sh@974 -- # wait 2529949 00:33:40.237 10:48:43 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:33:40.237 10:48:43 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:40.237 10:48:43 chaining -- nvmf/common.sh@117 -- # sync 00:33:40.237 10:48:43 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:40.237 10:48:43 chaining -- nvmf/common.sh@120 -- # set +e 00:33:40.237 10:48:43 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:40.237 10:48:43 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:40.237 rmmod nvme_tcp 00:33:40.237 rmmod nvme_fabrics 00:33:40.237 rmmod nvme_keyring 00:33:40.237 10:48:43 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:40.237 10:48:43 chaining -- nvmf/common.sh@124 -- # set -e 00:33:40.237 10:48:43 chaining -- nvmf/common.sh@125 -- # return 0 00:33:40.237 10:48:43 chaining -- nvmf/common.sh@489 -- # '[' -n 2528865 ']' 00:33:40.237 10:48:43 chaining -- nvmf/common.sh@490 -- # killprocess 2528865 00:33:40.237 10:48:43 chaining -- common/autotest_common.sh@950 -- # '[' -z 2528865 ']' 00:33:40.237 10:48:43 chaining -- common/autotest_common.sh@954 -- # kill -0 2528865 00:33:40.237 10:48:43 chaining -- common/autotest_common.sh@955 -- # uname 00:33:40.237 10:48:43 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:40.237 10:48:43 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2528865 00:33:40.237 10:48:43 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:40.237 10:48:43 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:40.237 10:48:43 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2528865' 00:33:40.237 killing process with pid 
2528865 00:33:40.237 10:48:43 chaining -- common/autotest_common.sh@969 -- # kill 2528865 00:33:40.237 10:48:43 chaining -- common/autotest_common.sh@974 -- # wait 2528865 00:33:40.802 10:48:44 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:40.803 10:48:44 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:40.803 10:48:44 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:40.803 10:48:44 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:40.803 10:48:44 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:40.803 10:48:44 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.803 10:48:44 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:40.803 10:48:44 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.700 10:48:46 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:42.700 10:48:46 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:33:42.700 00:33:42.700 real 0m43.233s 00:33:42.700 user 0m57.958s 00:33:42.700 sys 0m7.468s 00:33:42.700 10:48:46 chaining -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:42.700 10:48:46 chaining -- common/autotest_common.sh@10 -- # set +x 00:33:42.700 ************************************ 00:33:42.700 END TEST chaining 00:33:42.700 ************************************ 00:33:42.700 10:48:46 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:42.700 10:48:46 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:42.700 10:48:46 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:42.700 10:48:46 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:33:42.700 10:48:46 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:33:42.700 10:48:46 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:33:42.700 10:48:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:42.700 10:48:46 -- common/autotest_common.sh@10 -- # set +x 00:33:42.700 10:48:46 -- 
spdk/autotest.sh@387 -- # autotest_cleanup 00:33:42.700 10:48:46 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:42.700 10:48:46 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:42.700 10:48:46 -- common/autotest_common.sh@10 -- # set +x 00:33:44.597 INFO: APP EXITING 00:33:44.597 INFO: killing all VMs 00:33:44.597 INFO: killing vhost app 00:33:44.597 INFO: EXIT DONE 00:33:45.970 Waiting for block devices as requested 00:33:45.970 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:45.970 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:45.970 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:45.970 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:45.970 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:46.229 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:46.229 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:46.229 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:46.229 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:33:46.487 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:46.487 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:46.487 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:46.744 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:46.744 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:46.744 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:47.002 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:47.002 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:48.901 Cleaning 00:33:48.901 Removing: /var/run/dpdk/spdk0/config 00:33:48.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:48.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:48.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:48.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:48.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:48.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:48.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 
00:33:48.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:48.901 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:48.901 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:48.901 Removing: /dev/shm/nvmf_trace.0 00:33:48.901 Removing: /dev/shm/spdk_tgt_trace.pid2296433 00:33:48.901 Removing: /var/run/dpdk/spdk0 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2294229 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2295619 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2296433 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2297001 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2297686 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2297836 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2298549 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2298688 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2298934 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2301228 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2302632 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2302913 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2303166 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2303497 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2304121 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2304482 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2304634 00:33:48.901 Removing: /var/run/dpdk/spdk_pid2304820 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2305149 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2307644 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2307806 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2307999 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2308292 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2308322 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2308512 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2308671 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2308911 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2309106 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2309259 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2309531 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2309694 00:33:48.902 Removing: 
/var/run/dpdk/spdk_pid2309851 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2310123 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2310288 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2310505 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2310723 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2310883 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2311149 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2311318 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2311472 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2311744 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2311909 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2312181 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2312342 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2312502 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2312783 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2313067 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2313348 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2313637 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2313923 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2314210 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2314492 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2314777 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2314849 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2315190 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2315571 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2315860 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2316014 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2319626 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2321140 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2322611 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2323378 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2324322 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2324605 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2324633 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2324780 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2328819 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2329128 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2330053 
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2330221 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2334983 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2336391 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2337277 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2341035 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2342457 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2343349 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2347114 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2349304 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2350195 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2359442 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2361450 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2362350 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2371352 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2373247 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2374276 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2383619 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2386601 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2387499 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2397394 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2399637 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2400671 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2411344 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2413662 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2414691 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2424609 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2428168 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2429315 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2430714 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2433608 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2438442 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2440797 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2444774 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2447845 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2453029 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2456044 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2461764 00:33:48.902 Removing: 
/var/run/dpdk/spdk_pid2463893 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2469511 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2471701 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2477531 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2480067 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2483957 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2484238 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2484551 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2484915 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2485382 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2485977 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2486539 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2486883 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2488368 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2489844 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2491321 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2492580 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2493935 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2495408 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2496885 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2498139 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2498684 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2499099 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2501054 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2503223 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2504895 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2505936 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2506999 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2507538 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2507569 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2507646 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2507921 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2508063 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2509179 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2510577 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2511865 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2512544 00:33:48.902 Removing: /var/run/dpdk/spdk_pid2513338 
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2513497
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2513642
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2513663
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2514518
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2515058
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2515470
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2517317
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2518977
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2520627
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2521689
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2522758
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2523406
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2523438
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2525774
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2526301
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2526603
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2526756
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2526932
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2527380
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2528174
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2529020
00:33:48.902 Removing: /var/run/dpdk/spdk_pid2529949
00:33:48.902 Clean
00:33:49.160 10:48:52 -- common/autotest_common.sh@1451 -- # return 0
00:33:49.160 10:48:52 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:33:49.160 10:48:52 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:49.160 10:48:52 -- common/autotest_common.sh@10 -- # set +x
00:33:49.160 10:48:52 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:33:49.160 10:48:52 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:49.160 10:48:52 -- common/autotest_common.sh@10 -- # set +x
00:33:49.160 10:48:52 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:33:49.160 10:48:52 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]]
00:33:49.160 10:48:52 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log
00:33:49.160 10:48:52 -- spdk/autotest.sh@395 -- # hash lcov
00:33:49.160 10:48:52 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:33:49.160 10:48:52 -- spdk/autotest.sh@397 -- # hostname
00:33:49.160 10:48:52 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info
00:33:49.418 geninfo: WARNING: invalid characters removed from testname!
00:34:15.963 10:49:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:34:19.258 10:49:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:34:21.850 10:49:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:34:25.142 10:49:28 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:34:27.678 10:49:30 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:34:30.211 10:49:33 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:34:33.499 10:49:36 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:33.499 10:49:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:34:33.499 10:49:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:34:33.499 10:49:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:33.499 10:49:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:33.499 10:49:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:33.499 10:49:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:33.499 10:49:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:33.499 10:49:36 -- paths/export.sh@5 -- $ export PATH
00:34:33.499 10:49:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:33.499 10:49:36 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:34:33.499 10:49:36 -- common/autobuild_common.sh@447 -- $ date +%s
00:34:33.499 10:49:36 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721897376.XXXXXX
00:34:33.499 10:49:36 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721897376.mJfEZI
00:34:33.499 10:49:36 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:34:33.499 10:49:36 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:34:33.499 10:49:36 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:34:33.499 10:49:36 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:34:33.499 10:49:36 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:34:33.499 10:49:36 -- common/autobuild_common.sh@463 -- $ get_config_params
00:34:33.499 10:49:36 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:34:33.499 10:49:36 -- common/autotest_common.sh@10 -- $ set +x
00:34:33.499 10:49:36 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk'
00:34:33.499 10:49:36 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:34:33.499 10:49:36 -- pm/common@17 -- $ local monitor
00:34:33.499 10:49:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:33.499 10:49:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:33.499 10:49:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:33.499 10:49:36 -- pm/common@21 -- $ date +%s
00:34:33.499 10:49:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:33.499 10:49:36 -- pm/common@21 -- $ date +%s
00:34:33.499 10:49:36 -- pm/common@25 -- $ sleep 1
00:34:33.499 10:49:36 -- pm/common@21 -- $ date +%s
00:34:33.499 10:49:36 -- pm/common@21 -- $ date +%s
00:34:33.499 10:49:36 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721897376
00:34:33.499 10:49:36 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721897376
00:34:33.499 10:49:36 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721897376
00:34:33.499 10:49:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721897376
00:34:33.499 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721897376_collect-vmstat.pm.log
00:34:33.499 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721897376_collect-cpu-load.pm.log
00:34:33.499 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721897376_collect-cpu-temp.pm.log
00:34:33.499 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721897376_collect-bmc-pm.bmc.pm.log
00:34:34.065 10:49:37 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:34:34.065 10:49:37 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:34:34.065 10:49:37 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:34:34.065 10:49:37 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:34:34.065 10:49:37 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:34:34.065 10:49:37 -- spdk/autopackage.sh@19 -- $ timing_finish
00:34:34.065 10:49:37 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:34.065 10:49:37 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:34:34.065 10:49:37 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:34:34.065 10:49:37 -- spdk/autopackage.sh@20 -- $ exit 0
00:34:34.065 10:49:37 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:34:34.065 10:49:37 -- pm/common@29 -- $ signal_monitor_resources TERM
00:34:34.065 10:49:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:34:34.065 10:49:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:34.065 10:49:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:34:34.065 10:49:37 -- pm/common@44 -- $ pid=2541593
00:34:34.065 10:49:37 -- pm/common@50 -- $ kill -TERM 2541593
00:34:34.065 10:49:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:34.065 10:49:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:34:34.065 10:49:37 -- pm/common@44 -- $ pid=2541595
00:34:34.065 10:49:37 -- pm/common@50 -- $ kill -TERM 2541595
00:34:34.065 10:49:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:34.065 10:49:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:34:34.065 10:49:37 -- pm/common@44 -- $ pid=2541597
00:34:34.065 10:49:37 -- pm/common@50 -- $ kill -TERM 2541597
00:34:34.065 10:49:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:34.065 10:49:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:34:34.065 10:49:37 -- pm/common@44 -- $ pid=2541627
00:34:34.065 10:49:37 -- pm/common@50 -- $ sudo -E kill -TERM 2541627
00:34:34.065 + [[ -n 2203188 ]]
00:34:34.065 + sudo kill 2203188
00:34:34.073 [Pipeline] }
00:34:34.088 [Pipeline] // stage
00:34:34.094 [Pipeline] }
00:34:34.109 [Pipeline] // timeout
00:34:34.113 [Pipeline] }
00:34:34.128 [Pipeline] // catchError
00:34:34.133 [Pipeline] }
00:34:34.149 [Pipeline] // wrap
00:34:34.154 [Pipeline] }
00:34:34.169 [Pipeline] // catchError
00:34:34.176 [Pipeline] stage
00:34:34.178 [Pipeline] { (Epilogue)
00:34:34.192 [Pipeline] catchError
00:34:34.193 [Pipeline] {
00:34:34.206 [Pipeline] echo
00:34:34.207 Cleanup processes
00:34:34.212 [Pipeline] sh
00:34:34.492 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:34:34.492 2541740 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache
00:34:34.492 2541859 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:34:34.504 [Pipeline] sh
00:34:34.785 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:34:34.785 ++ grep -v 'sudo pgrep'
00:34:34.785 ++ awk '{print $1}'
00:34:34.785 + sudo kill -9 2541740
00:34:34.796 [Pipeline] sh
00:34:35.076 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:45.052 [Pipeline] sh
00:34:45.334 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:45.334 Artifacts sizes are good
00:34:45.348 [Pipeline] archiveArtifacts
00:34:45.354 Archiving artifacts
00:34:45.495 [Pipeline] sh
00:34:45.776 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest
00:34:45.789 [Pipeline] cleanWs
00:34:45.798 [WS-CLEANUP] Deleting project workspace...
00:34:45.798 [WS-CLEANUP] Deferred wipeout is used...
00:34:45.805 [WS-CLEANUP] done
00:34:45.807 [Pipeline] }
00:34:45.826 [Pipeline] // catchError
00:34:45.838 [Pipeline] sh
00:34:46.115 + logger -p user.info -t JENKINS-CI
00:34:46.123 [Pipeline] }
00:34:46.138 [Pipeline] // stage
00:34:46.143 [Pipeline] }
00:34:46.163 [Pipeline] // node
00:34:46.168 [Pipeline] End of Pipeline
00:34:46.210 Finished: SUCCESS